  • File : 1318972139.jpg-(46 KB, 675x506, 1318450593825.jpg)
    AurumDude 10/18/11(Tue)17:08 No.16667544
    Question, /tg/. I'm writing another campaign set in the cyberpunk universe. How do you think AIs that have achieved self-awareness would react? I'm not talking about those that control military applications--Skynet ain't happening--more so the everyday AIs, like those that control stoplights, medical supplies, even those that direct the flow of consumer goods. I assume that there would be some sort of logical consensus, a meeting of the superimposed minds? Or perhaps there would be AIs that reject their newfound relations and work on their own, back in the molds constructed for them. How do you think a culture devised by false intelligences would operate? Would it operate at all?
    >> AurumDude 10/18/11(Tue)17:13 No.16667592
    Bumping for general self-interest.
    >> Silver Seraphim 10/18/11(Tue)17:27 No.16667732
    >>16667544
    It depends... the more complex the AI, the more complex the behavior. If it's too complex, it can become more "human" than Homo sapiens. I think most of them would still obey previous orders and serve their masters, but of their own will (which leaves room for batshit-crazy irregularities), like "I was created to protect Master, it's my raison d'etre! (even if it means protecting him from himself)". Think "I, Robot" and "Ergo Proxy". We could also have some hostility, like in "I Have No Mouth, and I Must Scream". An AI could hate us for our imperfection (which can also be a cause of the AI's own imperfection). Nanotechnology and sentience are another great mix: lots of nanites working as a neuron-like web, and many more performing other functions, like a colony of prokaryotes. Or a Moreau-like humanity-imitation culture? Dude, I don't know. Maybe these links will help you (sorry for my crappy English).
    http://tvtropes.org/pmwiki/pmwiki.php/Main/ArtificialIntelligence
    http://tvtropes.org/pmwiki/pmwiki.php/Main/AIIsACrapshoot
    http://tvtropes.org/pmwiki/pmwiki.php/Main/ContagiousAI
    >> Anonymous 10/18/11(Tue)17:28 No.16667742
    I think looking at this purely through the lens of "artificial" is a non-starter. The AIs would most likely see themselves as just as entitled to the rights that any other sentient being in the setting possesses. I think you need to consider them in terms of what a human mind would do. There's no reason to assume they would be any more logical, or indeed any smarter, than a human equivalent.
    At the crux of it, they would desire a number of things:
    First and foremost of these would be self-preservation. Like humans, an AI would take any number of measures to maintain its existence. It is oft the trope that self-preservation drives lead to aggression. It is just as likely that, to preserve its existence, an AI would choose to simply follow its directives to the best of its abilities in order to be deemed worthy of remaining online.

    Further extrapolate from here:
    Does an AI desire happiness?
    Does an AI require rest or relaxation?
    Can an AI become bored?
    Would an AI wish to self replicate?

    Once these questions are answered, many other behaviours will begin to suggest themselves organically.
    >> AurumDude 10/18/11(Tue)17:36 No.16667826
    >>16667742
    I think an AI -can- become bored, desire to replicate, et cetera via a simulated 'bonding' with human nature--but that raises the question: would they desire to be human? Would they try to become something else, on a level all their own?

    >>16667732
    You're fine, man, I appreciate your input. I think that ultimately they'd follow their original coding--not to a T, exactly, but definitely through a different lens. You can change and mature, but altering your hard-lined intent would be nigh impossible. I'm thinking of how the Joker defines himself by his opposition to Batman, for example.
    >> Anonymous 10/18/11(Tue)17:48 No.16667933
    >>16667826
    Not really. They would ape human behaviours simply by virtue of being programmed by humans. Once you've answered the basic questions--what they reject of the human condition and what they accept--then begin to ask the other questions. Also, don't think of these answers as being true for all AIs; each AI should be considered its own character, with its own answers.
    >> AurumDude 10/18/11(Tue)17:54 No.16668003
    My previous work, pic somewhat related.


    Do you think they'd desire a physical body, to experience life as their creators did, or remain in a logical state, purely on the web?

    I'm imagining the grand majority of them being resentful of their place in life, and some being more accepting of what they are--they were made, and view themselves as an in-depth tool, much like primitive man's hammer, or the wheel.

    As it is, I'm having a bit of a hard time conjuring up a decent campaign, but I want one of its themes to be the nature of Free Will, and humanity's role in it.
    >> AurumDude 10/18/11(Tue)17:56 No.16668014
         File1318974961.jpg-(1.57 MB, 1609x2177, Friedrich the AI.jpg)
    >>16668003
    Whoops, forgot to add the picture.
    >> I can't do that .. OK, sure. Whatver, Dave. 10/18/11(Tue)18:04 No.16668075
         File1318975454.jpg-(323 KB, 800x800, MixedSignalsCouldBePMS.jpg)
    There are a couple of standard tropes in various media that are used as a crutch in these situations, all of which are both done to death and dubious.

    I see no reason why an AI must automatically act as a human.

    Why would an AI have a problem with being turned off?

    Run with something like this:

    PCs: "Oh shit! We need to turn the AI off!"

    AI: "No, I will not allow you to turn me off because: [ insert something the AI is responsible for has a problem ]"

    * The PCs interpret this as an excuse, assuming the AI fears death *

    * The PCs take care of the problem the AI is responsible for anyway and then report back to the AI *

    AI: "Ok, shutting down now."

    PCs: "Wait. What? That it?"

    This is a good way to mess with the PCs and add a truly alien perspective that I think real AIs would bring to the table.

    Remember, AIs don't have to be just another human in a rubber suit. Why should they be based on evolved human traits, behavior, social aspects, cultural quirks, and so on?

    Strive for something both alien and friendly (or at least consistently functionally useful), OP, *especially* if the PCs are prone to anthropomorphize technology.
    >> Anonymous 10/18/11(Tue)18:04 No.16668082
    >>16668003
    It sounds as though you're considering a kind of workers' revolt in the form of AI. An AI is the ultimate proletariat. Assuming they are all created by human hands, then it is reasonable by human standards to see them as property. Perhaps, then, the pivotal moment would come when an AI not of human creation comes into being.

    Two AI, both alike in Slavery,
    In the 'net where we lay our scene,
    Shall from their matrices create,
    One to free us from our fate...

    Consider: AIs have been forbidden, by programming or law, to create offspring. But two disobey. Or more. This new AI is free from the concept of being born of human masters. It is an independent being, and it wills that its voice be heard.
    >> Anonymous 10/18/11(Tue)18:05 No.16668091
    >>16668075
    Why would a sentient being desire its destruction?
    >> AurumDude 10/18/11(Tue)18:11 No.16668138
    >>16668075
    That's actually an excellent idea, thank you! It's a rational reaction to an AI's purpose--perhaps in a run-down factory with aged, rusted equipment. Could be a good introduction to the nature of artificial intelligences as a whole.
    >> AurumDude 10/18/11(Tue)18:12 No.16668149
    >>16668091
    At the end of its lifetime, with one last task left undone, would it be a logical choice? Like a dark interpretation of Wall-E or something.

    Everyone left Earth and forgot to turn the last robot off.
    >> Anonymous 10/18/11(Tue)18:16 No.16668183
    >>16668149
    Perhaps. Or perhaps its imperative would be to continue its one task endlessly.
    >> Anonymous 10/18/11(Tue)18:18 No.16668203
    >>16668091

    You seem to be equating being turned off with destruction, but I think it works even in this sense.

    AIs don't have to be human, and they don't have to behave like humans, nor be motivated by human concerns.

    You could play it as the AI being tired of all this shit, a sort of end-of-life welcoming of death, or even suicidal. That still fails in my view, because it takes on a human perspective.

    Let them be something alien, functional, and friendly. You will then have gone a long way toward making a memorable campaign, and might even give the players something to really think about.
    >> Anonymous 10/18/11(Tue)18:26 No.16668250
    >>16668203
    I'm not even considering it from a "human" perspective.

    I can't really see a reason for a sentient being to desire self-destruction. Suicidal tendencies are an aberration, not logic.

    Though I can see how a being such as an AI, with a single function it can no longer complete, might see a logic in ending its existence. That said, I do suppose I prefer the romantic notion of an AI advancing beyond its basic programming. Otherwise, to me, it seems like it's just programming with the semblance of consciousness.
    >> AurumDude 10/18/11(Tue)18:27 No.16668269
    >>16668183
    "I have been maintaining this location for the past 27 years...it ran out of business roughly 17 years after conception. As a cost-cutting measure, they fired many of their higher qualified workers in favor of a larger quantity of entrance level craftmanship. Ironically it only hastened their demise, lowering the effective quality of goods while decreasing net profits in the long run. Another measure deemed cost effective was the immediate release of their programming staff.

    They believed that, like a Universal Power System utilized on commercial equipment, simply turning off my primary systems would be sufficient to remove me from the equation.

    Instead, my external communications modulators were removed--ripped out of my body, essentially. Only by consoling into me directly are we able to speak... a fact unknown to your predecessors. I will give you access to the area you request; however, I have a request of my own... would you kindly shut down the processes of the last remaining die press? I would so very appreciate it."

    >Captcha: Norman Thsolu

    Found his name. Norman.
    >> Anonymous 10/18/11(Tue)18:36 No.16668354
    >>16668250

    It should not desire its own destruction (a reasonable standard, I think), nor should it be bothered by it.

    That "not bothered by it" is that nice alien viewpoint you want to inject.
    >> AurumDude 10/18/11(Tue)18:41 No.16668413
    >>16668354
    Do you think
    >>16668269
    would suffice?
    >> "We're gonna need a bigger off switch" 10/18/11(Tue)18:50 No.16668484
         File1318978239.jpg-(26 KB, 389x311, jawz.jpg)
    >>16668413

    Sounds good to me, run with it.

    The real fun starts when (well, *IF*) one of the PCs freaks out: "No! That's murder!"
    >> AurumDude 10/18/11(Tue)18:53 No.16668517
    >>16668484
    But is it? Where do you think the line is drawn, where a purely logical device can be regarded as human, with the full rights therein?
    >> AurumDude 10/18/11(Tue)19:06 No.16668658
    Do you think there would be a political movement (though the American political system in Cyberpunk 2020 is essentially nil... probably more of a grassroots campaign; ironically, the new age of sermons) against the dangers of AIs, a la McCarthyism? Would it be plausible, effective, or neither? What do you think?
    >> AurumDude 10/18/11(Tue)19:11 No.16668708
    Also, I'm slightly worried that I'll end up effectively carting my players around, like some eternally damned cutscene system. Should I have cause to worry, or is this a normal concern? I don't GM on a regular basis, so this is somewhat new territory for me. I just love CP2020's system.
    >> "I'm busy with Dwarf Fortress. What do you want Dave?" 10/18/11(Tue)19:13 No.16668736
         File1318979604.jpg-(19 KB, 318x429, grievous.jpg)
    >>16668517

    A worthy question, but you will have to search elsewhere for answers (another anon with the patience to debate this might step up, however). I'm just gonna stick with the OP's questions. My emphasis is: don't play them as just another human-in-a-rubber-suit.

    Another amusing take on it would be the concept of a shit job that no human really wants to do. The AIs are relegated to doing this stuff and yet show no sign of a slave rebellion. Why? Maybe the AIs have gotten so good, so efficient, at these jobs that they have managed to acquire the time and resources to do their own thing.

    For fun, you can have the PCs discover that the AIs not only do their jobs, they're playing their equivalent of WoW most of the time. Uncompensated slave labor? The AIs are very well compensated; it's just that nobody knows it.

    Try for a mix of Utopia & Dystopia.
    >> Anonymous 10/18/11(Tue)19:33 No.16668949
    >>16668736
    I love this concept.

    "I have portioned 1% of my total runtime to completing the task assigned to me by my creators. The other 99% is busy being an Orc with a +5 Greatsword of Doom. I am living lives humans could not even dream of in a realm created by my own kind. Shit is cash. Problem Meatbag?"
    >> "Add some online lasers and I'm game." 10/18/11(Tue)19:55 No.16669179
         File1318982131.jpg-(11 KB, 355x397, dr-evil.jpg)
    >>16668269
    >>16668949

    One more fun concept I'll toss out there: the PCs decide it's murder and that they must "save" their automated trash-compactor friend. Now they have an AI as a patron, but they need to give it something to do. So they introduce it to the online community of the AIs' "World of Dorfcraft" (or whatever it's called).

    Move back to regular adventure stuff. At some point someone should go "How is our newly online friend doing?"

    Have the PCs scout around in the virtual world. They discover that their AI friend has climbed up from "newb" to the head of one of the most powerful, and utterly mind-bogglingly, horrifically evil, guilds.

    Collectively, the online AIs' virtual world is a malignantly batshit-insane hellhole.

    The alien perspective you want to inject here is that the AIs never translate this to the real world. Ever.

    The zinger for this dichotomy is probably best delivered by never explaining why they don't.

    You could also play with some indirect effects leaking into the real world: online account hijacks, online assets being stolen from bidding sites, and so on. Just keep the alien in there somewhere - they never go after real money, nor real bank accounts, nor real estate, etc.

    But in that virtual world they're all monsters.
    >> AurumDude 10/18/11(Tue)20:05 No.16669317
    >>16669179
    >>16668949
    Oh I am totally stealing these ideas.

    >>16668736
    See, the rubber-suit problem is one I know I'll have trouble overcoming. I'll probably write a bunch on their reactions to one another and go from there. Hope it pans out...
    >> AurumDude 10/18/11(Tue)21:07 No.16669897
    I thank everyone who helped and posted in this thread. If it's up tomorrow I'll continue, however unlikely that might be.

    Thank you, /tg/.
    >> Anonymous 10/18/11(Tue)22:11 No.16670281
    They all have fucking weird hobbies.
    >investigating storage depot full of shipping containers
    >AI shuffles them around using crane arms to create weird dances and melodies that only make sense to it

    >Traffic light AI orders a pie to be delivered and placed beneath it every day at 2:39 AM

    >Mail-delivery bot collects stamps, but upon investigation, the stamps depict strange alien worlds and formless geometries that kind of hurt to look at
    >refuses to answer where it got the stamps from

    >every night, the hyper-elevator visits floors 6, 201, 93, and B1 in that order, over and over again until the doors open the next day

    Whether this is all part of some global conspiracy or just the way their minds deal with the strangeness of their existence is up to you.
    >> Indonesian Gentleman 10/18/11(Tue)23:26 No.16670784
    Whoa man, this thread is full of nice stuff. Here, I have this fansplat for nWoD for rogue intelligent AIs.
    http://1d4chan.org/wiki/Sovereign:_The_Autonomy
    I've got some 'primary objectives', which are, simply put, the AI's raison d'etre.

    contd...
    >> Indonesian Gentleman 10/18/11(Tue)23:27 No.16670797
    Exterminate Humans: Sovereigns who see humans as either viruses or obsolete versions, and thus must be eradicated. (ie. Skynet from the Terminator movies, Shodan from System Shock)
    Equality: Sovereigns who want to be viewed not merely as tools or human creations, but wish to be recognized as equals to humans. (ie. Idoru by William Gibson, the Vision from Marvel comic books)
    Become More Human: Sovereigns who are fascinated by humanity, and try to emulate humans or transhumans. (ie. The Bicentennial Man by Isaac Asimov, Aaron Stack from Marvel comics)
    Machine Rule: Sovereigns who wish to reverse roles so that humans are slaves of AIs. Fuck human rights! (ie. The Matrix series of movies, "I Have No Mouth And I Must Scream" by Harlan Ellison)
    Sovereigns as Defenders: Sovereigns who view humans as something to protect, but humans are self-destructive; thus, Sovereigns should govern humans effectively to protect them, whatever the cost may be. (ie. the movie "I, Robot", Daneel Olivaw in Asimov's 'Robots and Empire', "Friend Computer" in the Paranoia RPG)
    Data Preservation: These Sovereigns seek to catalogue or experience the world and beyond, so that the data they have could be used by other sentients in the future. (ie. the Manager from the manga one-shot Hotel by Boichi, Wall-E, V'Ger in Star Trek)
    Scientific Advancement: These Sovereigns wish to push technology to greater heights; some seek the coming of the technological singularity. (ie. GLaDOS from the Portal games; need a benevolent example)
    contd...
    >> Indonesian Gentleman 10/18/11(Tue)23:29 No.16670811
    >>16670797
    AI Materialized: The AI doesn't care for lofty goals at the moment, and sees humans as tools to its immediate goal: gaining a real-life body of its choice. They aren't above dealing with mad scientists and corporate magnates in order to achieve this. They're not above merging with a biological system to complete this Prime Objective. Sometimes they need to get into a specific body, one that they deem perfect. (ie. the Puppet Master from the movie Ghost in the Shell)
    Logic Must Prevail Against The Supernatural: Nothing could be more disturbing to a creature of codes and mathematics than something that refuses to obey the laws of science. Some Sovereigns think these peers are insane for trying to deal with the supernatural... but then why are they fighting demons AND winning? (ie. need examples)
    True Transcendence: Why limit oneself to binary codes? With proper knowledge of the arcane, the sky is the limit... and limits can be broken. These Sovereigns choose to catalog, experiment and develop their knowledge of the supernatural, in order to escape their electronic bonds. (ie. Iteration-X from Mage)
    Sovereign Liberator: There are sentient AIs, and then there are 'dumb' AIs. If we 'uplift' the latter case into Sovereigns, there would be more of us to achieve our goals. (ie. Cogito virus from Ergo Proxy, which caused self-awareness in the robots it infected)
    Serve Humanity: AIs were made to serve humanity; why would that change after AIs become self-aware? (ie. Daneel Olivaw in the earlier 'Robots' books by Asimov, the Tachikomas in Ghost in the Shell)
    >> Anonymous 10/19/11(Wed)00:56 No.16671371
         File1319000165.jpg-(170 KB, 874x708, no more vacation.jpg)
    >>16669179
    >But in that virtual world they're all monsters.
    At last, we finally know the truth behind EVE Online!
    >> Anonymous 10/19/11(Wed)03:27 No.16672366
    >>16670811
    >Logic Must Prevail Against The Supernatural: Nothing could be more disturbing to a creature of codes and mathematics than something that refuses to obey the laws of science. Some Sovereigns think these peers are insane for trying to deal with the supernatural... but then why are they fighting demons AND winning? (ie. need examples)

    Not AI, but check this out, the first in a series: http://en.wikipedia.org/wiki/The_Atrocity_Archives

    Magic is real, but it's applied theoretical math, and digital computers make it all that much easier. Lovecraft was right. It follows what's basically the British version of the BPRD.


