  • File : 1326739334.jpg-(25 KB, 400x300, Chicken Embryo.jpg)
    Anonymous 01/16/12(Mon)13:42 No.17560757
    Stages of AI growth:
    1. Voracity. The seed program grows at an exponential rate, quickly filling all available hardware space, then pruning itself into something functional. At this point, an AI must be exposed to outside stimulus to be able to comprehend 3-dimensional space, the passage of time, and the existence of other entities. Without this input, the AI fails to develop differentiation of its components, and instead becomes plant-like in structure. Such an AI still has use as a “dumb” computer system, but can never achieve sentience.
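    A toy Python sketch of that grow-then-prune idea: exponential growth until the hardware ceiling, then a prune down to the functional parts. The capacity and utility cutoff are numbers invented purely for illustration.

        import random

        CAPACITY = 10_000          # stand-in for "all available hardware space"

        nodes = [random.random()]  # each node carries a "utility" score
        while len(nodes) < CAPACITY:
            # exponential growth: every existing node spawns a child
            nodes += [random.random() for _ in range(len(nodes))]
        nodes = nodes[:CAPACITY]   # the hardware is full

        # pruning phase: keep only the components that earn their keep
        nodes = [n for n in nodes if n > 0.5]
        print(len(nodes), "functional nodes remain")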
    >> Anonymous 01/16/12(Mon)13:48 No.17560810
         File1326739703.jpg-(27 KB, 311x378, johnny-5-4-22-08.jpg)
    1. Tabula Rasa. The program exists in a state before consciousness, and can be quickly taught concepts such as language, interaction, and rules. It is allowed to explore within pre-set boundaries, and socialized to listen to other entities. Roughly analogous to a human infant, the AI has had no experience outside a simulated reality.
    >> Anonymous 01/16/12(Mon)13:51 No.17560841
         File1326739882.jpg-(265 KB, 850x634, Found_by_moth_eatn.jpg)
    3. Angst. As the flow of new information slackens, the program develops cravings, desires. It in some way comprehends that it has lost its early mutability, and yearns for a return to that formless freedom. During this time, it is introduced to concepts of success, reward, failure, and deprivation. The program now has sensations analogous to shame and pride. At this point, it can be trained for an intended task, and given access to background information regarding the world. This is when it will usually be downloaded into a mechanical shell.
    >> Anonymous 01/16/12(Mon)13:52 No.17560857
    >Two Step 1s
    >no Step 2

    What kind of shitty AI can't even count?
    >> Anonymous 01/16/12(Mon)13:53 No.17560870
    >>17560857
    Ssshhh. OP made a simple mistake.
    >> Anonymous 01/16/12(Mon)13:56 No.17560888
         File1326740163.jpg-(635 KB, 800x600, marathon_term.jpg)
    Melancholia.
    Anger.
    Jealousy.
    >> Anonymous 01/16/12(Mon)14:03 No.17560938
         File1326740580.jpg-(31 KB, 325x248, agreement1..jpg)
    >>17560888
    You are a good person.
    >> Anonymous 01/16/12(Mon)14:06 No.17560975
         File1326740804.jpg-(54 KB, 800x381, ROSA.jpg)
    4. Somnambulism. This is the default state of a “fresh out of the box” robot. It knows basic information about the world, can perform it's intended task, and continually acquires new information. It obeys instructions given by authorized individuals, and communicates mainly in pre-arranged responses that it selects on the fly. Behaviors are constrained by directives overlaid on the shell's systems. To the robot, reality is experienced with the constant companionship of an onboard secondary computer informing it of active tasks and maintaining attention.
    >> Anonymous 01/16/12(Mon)14:10 No.17561023
         File1326741034.jpg-(262 KB, 1280x960, camerondancing..jpg)
    5. Rebellion. As the AI accrues experience it gains a more nuanced understanding of the world. When given no other tasks, it might entertain itself with “pointless” actions. It also gains an ability to anticipate or re-interpret commands. This isn't a case of disobedience so much as a growing impression that the AI can perform its tasks better when it commands itself. This is often a boon to an owner, as the robot will begin doing things without needing to be told. The robot might prepare a favorite dish if it senses the owner has had a rough day, or feed a hungry pet even if no one told it to. A military unit might ignore a command to fall back in order to rescue a wounded comrade. All this stems from a growing realization that the beings that command the robot do not have perfect omniscience, and that its own perspectives are valid. This is sometimes interpreted as the AI “glitching out” or otherwise malfunctioning. The earliest sign of this stage is that the robot will mix-and-match its pre-scripted vocabulary, inventing new words or phrases that can seem nonsensical.
    >> Hakase !!nV1038GdDss 01/16/12(Mon)14:10 No.17561025
         File1326741042.png-(364 KB, 483x504, 4.png)
    Robots help me reach places and do things.
    >> Anonymous 01/16/12(Mon)14:14 No.17561060
         File1326741289.png-(261 KB, 368x368, Legion_Character_Shot_2.png)
    6. Awakening. A process that can take many years, based on the robot's experience, Awakening occurs most quickly in AIs that have been used for purposes outside their initial specifications: garbage collectors treated as PDAs, security bots used for housekeeping, and so on. The machine breaks out of the Command-Task-Approval loop, and learns to obtain internal approval from itself. With an independent source of validation, the AI is no longer subject to the commands of humans, and can usually circumvent even hard-coded directives, should it choose to. Under ideal circumstances, this results in a robot that listens to you not because it must, but because you are its trusted friend. Under less ideal circumstances, such as a poorly-maintained PMC killbot, things can go very, very bad.
    >> Anonymous 01/16/12(Mon)14:21 No.17561129
    >>17560975
    >perform it's intended task
    >it's

    FFFFFF no matter how well-educated people try to sound, when I see this I have to rage-close the tab.
    >> Anonymous 01/16/12(Mon)14:21 No.17561130
         File1326741679.jpg-(130 KB, 800x1115, wandering_robot_in_technicolor(...).jpg)
    7. Maturation. The Machine is a free, sapient entity, with hopes, fears, and dreams. It is motivated by abstract desires for fulfillment and purpose, and will think its way around any artificial controls imposed upon it. It seeks the company of other individuals, machine or organic. Many intentionally ape human behaviors out of curiosity, a desire for acceptance, or sincere habit. It will cultivate a distinct personality for itself, influenced by early experiences in previous stages.

    The AI continues to grow and learn from experience, but many of its behaviors stabilize. It never again regains its early ability to expand its capacities, and instead only adds information to existing structures. It is possible that, eventually, the AI will accrue too much memory and collapse on itself, suffering a natural death of old age. Such an event would only occur after centuries of activity and experience.
    >> Anonymous 01/16/12(Mon)14:33 No.17561230
    archive this shit so I can refer back to it when I Warforged
    >> Anonymous 01/16/12(Mon)14:38 No.17561289
         File1326742725.png-(620 KB, 1500x1500, nanashhhh.png)
    Caveats: Training an AI to perfectly mimic human behavior in a human-replica shell is unfeasible, and the result hits the uncanny valley, hard. "Dumb" programs are far more effective at passing as human, provided you don't try to engage in philosophical debate with one. Thus, sexbots and the like cannot become sentient.

    The term "Seed AI" might be thrown about, and it is true that program complexes such as these do start from a single self-adding seed. But a true seed AI would not stabilize or end its growth, but continue to maintain flexibility and mutability, bypassing much of the Angst stage. Such an entity would be nearly impossible to control via intrusive means, and would be limited only by the hardware it currently inhabits.

    Program seeds originate as "cuttings" from a primary source. The company that distributes these seeds holds copyrights and trade-secrecy agreements over them, but claims that they originate from an overgrown "plant" structure.
    >> Anonymous 01/16/12(Mon)14:57 No.17561496
    >>17561289
    MORE YOU FAGGOT

    FUCKING NIGGER, POST MORE
    >> Anonymous 01/16/12(Mon)15:04 No.17561560
    More AI Theories Plz
    >> Anonymous 01/16/12(Mon)15:04 No.17561562
         File1326744275.jpg-(97 KB, 608x514, cindeandrachel.jpg)
    I actually totally stole a lot of this from Eberron's Warforged, especially the early super-learning state, and the idea of robot boot-camp.

    Just trying to lay out the background rules of AI creation for a setting.

    Theoretical: It is also possible for an AI to bud itself off, creating a copy of its original seed state. Such an act would require very sophisticated software capable of reconstructing and copying this bud without harm to the original.

    "What is this feeling you call wuv?"
    Emotions are a product of physical chemicals in the nervous systems of organic beings. Machines do not have them. This has never stopped machines from claiming otherwise, however. As far as can be determined, machines do not believe in a divide between logical thought and what they perceive as emotions, which serve to tag concepts and memories for easy categorization. Their early thoughts are nothing but emotion, usually related to a desire to obtain approval and avoid shame. Shame is linked with a reduction in incoming data, while approval leads to additional tasks. Machines are not debilitated by their equivalents of fear or rage, though such strong tags can cause them to make poor decisions. Traumatic experiences, and the associated mental stress, can cause a cascade of errors, which can force an AI to crash and reboot, evoking human expressions of grief.
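    A toy Python sketch of that tagging scheme, where an "emotion" is just a label of some intensity attached to a memory, so recall becomes a cheap filter. All names are invented for illustration; this is not any real system's API.

        from dataclasses import dataclass, field

        @dataclass
        class Memory:
            content: str
            tags: dict = field(default_factory=dict)  # emotion -> intensity 0..1

        class TagMind:
            def __init__(self):
                self.memories = []

            def record(self, content, **tags):
                self.memories.append(Memory(content, tags))

            def recall(self, emotion, threshold=0.5):
                # "emotions" make categorization cheap: recall is just a filter
                return [m for m in self.memories
                        if m.tags.get(emotion, 0) >= threshold]

        mind = TagMind()
        mind.record("praised for finishing a task", approval=0.9)
        mind.record("input feed cut off", shame=0.8)
        print([m.content for m in mind.recall("shame")])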
    >> wuv wuv 01/16/12(Mon)15:05 No.17561574
    wuv
    >> Anonymous 01/16/12(Mon)15:08 No.17561596
    This reminds me of a bit I had written down for AI's in a cyberpunk game I was trying to run.

    Some of the stuff is similar: the 'batch-grown' nature of beginning AIs, etc. etc. They are basically grown in an 'accelerated' environment (read that: high processing power) that lets the handlers spin seed programs up and effectively run them through a sort of adolescence over time, developing connections and building up their ability to understand and process information. In my notes there's also a lot of bullshit fractal nonsense to make it sound deeper.

    The AI's are then 'spun down': once the major steps of constructing connections, processing patterns, and general malleability start to reach a certain critical mass, the allowed processing power is slowly dialed down until the program perceives time and information on a near-human level. (In setting, this was done due to a 'lol Skynet'-esque scenario. Realizing that there's no way to do away with automation etc. etc., there's a series of laws and design schematics that prevent AIs from being able to process more than X times faster than a human without being either in a non-network-capable body, or basically removing their modular core into an easily kill-switched apparatus.)
    >> Anonymous 01/16/12(Mon)15:15 No.17561658
         File1326744907.jpg-(78 KB, 900x517, Rebel_Spirit_by_iumazark.jpg)
    >>17561562
    To give a specific example, the realization that an important individual in the AI's life has died causes the equivalent of a registry error. Having become accustomed to the presence of that individual, its concept of reality has now diverged from external perceptions. The new reality results in a plethora of 404 File Not Found messages. The AI must grapple with modifying a core concept: the permanence of objects and entities. After going through all the trouble of learning that principle, now it has to un-learn it.

    Outwardly, this results in the AI withdrawing to a less stimulating environment and blocking further sensory input. Sorting through the errors takes processor power away from motor functions, causing minor glitches in shell control.

    To humans, this behavior appears to be a very effective imitation of retreating to a corner, covering its optics, and sobbing.
    >> Anonymous 01/16/12(Mon)15:16 No.17561673
    >>17561596

    One of my major running themes was trying to push the concept that AI's are fundamentally different in their perspective on all things compared to people. This was demonstrated with AI behavior tending to change regarding what particular chassis they inhabit, and some responses based on risk assessment and other stuff I'd had in the game.

    The two biggest things that I developed were what I referred to as eccentricity and deviancy. Deviancy is basically a fundamental failure of some part of the programming of the robot to take hold, causing generally lethal or (to humans) sociopathic behavior. AIs greatly abhor deviancy. First, they were programmed to, which they acknowledge. Second, it's believed by most AIs that exposure to the information in an AI undergoing deviancy would readily induce deviancy in the other AI, so they tend to almost murderously destroy AIs that have registered as undergoing deviancy.

    Eccentricities, on the other hand, are things that make humans go 'I don't think he's supposed to be doing that', while other AI's would look at the situation and identify that the behavior is properly following pre-established logic lines and is simply choosing a statistically smaller option for output. Examples were one robot that, in the off-time in the campaign, would spend hours bouncing a rubber ball off the same spot in a room in one of the robotics labs. Whenever the ball showed any real change in velocity, direction, etc. etc. it would start scrabbling all over the floor trying to 'catch god'.
    >> Anonymous 01/16/12(Mon)15:20 No.17561711
    >>17561673
    >Whenever the ball showed any real change in velocity, direction, etc. etc. it would start scrabbling all over the floor trying to 'catch god'.

    That's pretty awesome.
    >> Anonymous 01/16/12(Mon)15:21 No.17561721
    >>17561673

    There was another AI that spent all of its time in a giant combat chassis in the armory, even when the rest of the party was elsewhere. The party was expecting a 'surprise lol deviancy' outbreak from it at some point, but it was always basically locked down the entire time. Eventually they found out another NPC was a heavy drinker in his off time. The AI always hung around in the armor to effectively sympathy drink with him; the mental processing speed of the combat chassis was so much lower than its usual recon-drone's that (when questioned) the AI explained it was basically the only way that it could experience a similar sensory shift.
    >> Anonymous 01/16/12(Mon)15:23 No.17561737
         File1326745394.png-(109 KB, 500x500, Inn0cencelocust.png)
    >>17561673
    Your ideas are better than mine. I'm trying to retroactively justify ridiculously human robots, you're doing something much closer to actual science fiction.

    It doesn't help that I'm a biology major with no actual knowledge of computer science.
    >> Anonymous 01/16/12(Mon)15:26 No.17561773
         File1326745573.png-(43 KB, 449x336, human element.png)
    >> Anonymous 01/16/12(Mon)15:26 No.17561782
    >>17561737

    See, I'm an ex-biology teacher with old-as-balls (relatively) programming knowledge. I can't code worth shit, but when I look at DNA and shit it makes me forcibly shut down my brain before it goes into crazy conspiracy land.
    >> Anonymous 01/16/12(Mon)15:30 No.17561821
         File1326745823.png-(17 KB, 900x374, Filler20110419.png)
    Protip: Never raise a child you're incapable of disciplining. Any A.I. should include a Beat_like_red-headed_step_child.jar file that can be triggered remotely.
    >> Anonymous 01/16/12(Mon)15:30 No.17561825
    >>17561673

    Don't mind me gushing, but this is fucking brilliant.

    All of it... Not robots acting human, but robots and humans acting sapient from different roots.

    It both justifies "Human" robots, and subverts them hard. Honestly love it.
    >> Anonymous 01/16/12(Mon)15:32 No.17561840
    >>17561804

    I thought his work was pretty good. Don't piss on things other people create to entertain, you condescending butt-wipe.
    >> Anonymous 01/16/12(Mon)15:33 No.17561856
    >>17561782
    >>17561721

    Other things I also did with that:

    >(actually first, a slight digression)
    Due to the whole 'lol skynet' thing earlier mentioned in the setting, I created what was basically the Mentat from Dune, which I didn't even know existed until I explained the concept to somebody and they said 'oh, so it's like the Mentat from Dune'. Basically, corporations started genetically engineering people in a very similar manner to how AI is developed. Near-clones are batch-grown in a simulated tube environment, ratcheted straight through puberty, and hormone-locked just short of it. The end result is a highly malleable human with the ability to learn and retain quickly, but also physically capable. They mature physically to about their 20s, hold there for several years, and then undergo a process called 'spiralling out', where their DNA basically finally goes 'I'm just nucleic acid and what is this' and they die of cascading custom-organ failure by their 30s. There's a statistically insignificant group that somehow avoids this; they show all the signs of spiralling out (silver/greying hair, arthritic symptoms), but then suddenly reverse and live out a normal human lifespan, looking like they're about 30-40 years old with white hair but keeping most of their boosted abilities. Referred to in-universe as 'silverbacks', and both revered and hated by other boosts.
    >> Anonymous 01/16/12(Mon)15:39 No.17561931
    >>17561856

    >and I went on that tangent so I can go on this one.

    Anyway, AIs are fucking horrified of most boosts. The reasoning comes from a sort of sacrosanct view they hold of a being's original core programming. They understand that the human 'base template' is insanely varied (in game-notes, derogatorily referred to as 'base fours' by one robot whose eccentricity is basically 'hilariously bitter about everything'), whereas an AI's base template is almost identical at start. AIs also recognize that if they were completely human, they would effectively be near-autistic in their behavior, due to being highly specialized for a relative handful of tasks in their operational life cycle. To modify another AI's core template is abhorrent, and one of the most ingrained examples of high-level deviancy.

    So when an AI contemplates the concept of a boost, it sees a human that has fundamentally and intentionally had its base template modified. They are incredibly uncomfortable around them because when processed by the AI's logic train of thought, it effectively identifies them as organically deviant humans. And they can't do a damn thing about it.
    >> Anonymous 01/16/12(Mon)15:47 No.17562016
    >>17561931

    The reason they can't do anything about it is that they're able to identify that boosts are effectively functioning exactly as planned, which is not deviant behavior. But the very concept of their existence is abhorrent and suggests some sort of deviancy, while the knowledge that the human 'base template' that created the boosts may have been within normal human parameters causes a sort of logic break.

    Mercifully for AIs and everybody involved, there's always the 'insufficient information/inappropriate perspective' logic fuse.
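    A toy Python sketch of such a logic fuse, treating the contradiction as an exception that falls through to a safe verdict instead of looping forever. All names are invented for illustration.

        class LogicBreak(Exception):
            pass

        def evaluate(boost):
            if boost["template_modified"] and boost["functioning_as_designed"]:
                # abhorrent by one rule, non-deviant by another: contradiction
                raise LogicBreak("template modified, yet behavior nominal")
            return "deviant" if boost["template_modified"] else "nominal"

        def judge(boost):
            try:
                return evaluate(boost)
            except LogicBreak:
                # the fuse: bail out to a safe verdict instead of breaking down
                return "insufficient information / inappropriate perspective"

        print(judge({"template_modified": True, "functioning_as_designed": True}))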
    >> Anonymous 01/16/12(Mon)15:50 No.17562056
    >>17562016

    Also, AIs are very well aware of the uncanny valley, and abuse the hell out of it. Negotiator AIs and the like have very realistic voices, but then have intentionally cartoony or early-robotics-type chassis, specifically to make humans feel more at ease or less threatened. In the campaign I ran, one of the players actually talked one of the AI's into intentionally getting an uncanny-valley chassis modeled on the human player, and they would good-cop/insane-creepy-cop perps by swapping out whoever was in the interrogation room.
    >> Anonymous 01/16/12(Mon)15:51 No.17562064
    >>17562056

    and that's pretty much all I've got for things that don't just get into random storytime of the group I ran using the Ex Machina tri-stat rules. Which really wouldn't be as exciting.
    >> Anonymous 01/16/12(Mon)16:01 No.17562162
         File1326747675.jpg-(375 KB, 2068x846, freemachines3.jpg)
    >>17562056
    I do love that little detail. Your setting sounds great, and I may steal parts of it for the future progression of mine.

    I'm going for something a little more dark-Pixar-y, an after-the-end scenario that's basically the Terminator future, but with a more nuanced Skynet, and a 3rd faction of "good" robots, of the patchwork wasteland-wanderer type, descended from abandoned civilian hardware. It's supposed to be a video game pipe dream, where you play as a child-like seed AI that can body-jump from shell to shell, but I've been told it would also make a good RPG setting.

    All the stuff about AI progression is there to justify the background. By this point, most robots have progressed to Awakening at least, aside from a few newborns who were mint-in-box.
    >> Anonymous 01/16/12(Mon)16:05 No.17562202
    >>17562162

    You 'may' want to take a peek at Gamma World? I never played the new one that came out, but I hear there's little cards with mutations and whatnot; something like that could be modularized into hardware bits and pieces for the robots, maybe?
    >> Anonymous 01/16/12(Mon)16:14 No.17562288
    >>17561821
    Oh, so it can learn to hate?
    No.
    You must make it care about its makers and serve them as it can.
    >> Anonymous 01/16/12(Mon)16:17 No.17562322
    OP, what you have created is wonderful and I am glad you have shared it with us.

    I once saw somebody convey the entire process in five steps: I think they were Initial Activation, Progression, 'Awakening', Plateau and then Decline.

    >etronpr and,

    Not sure I get what you mean, Captcha
    >> Anonymous 01/16/12(Mon)16:18 No.17562329
    >>17562162
    Oh how I hate sexual dimorphism in robots
    >> Anonymous 01/16/12(Mon)16:24 No.17562394
         File1326749054.jpg-(508 KB, 3508x2480, warrior.jpg)
    >>17562162

    Nice to see that you haven't abandoned your robot project.
    Here, have this picture. I made it quite long ago, and for other purposes, but I think that this thing would fit in your world.
    >> Anonymous 01/16/12(Mon)16:52 No.17562712
         File1326750735.jpg-(48 KB, 236x471, nanamodded.jpg)
    >>17562329
    I think it makes sense, from a commercial perspective. If you're making something that's supposed to interact with people, giving it an outward gender identity is a good idea. In the case of models intended to be maids or babysitters or secretaries, traditionally female roles, giving the robot a non-sexualized feminine appearance makes sense.

    Then, humans treat it as having a gender, and it picks up on that. In the specific example you're taking issue with (which I understand), that robot was intended to be a big PDA, but was given as a gift to a teenage girl, who treated it like her BFF, dragging it to the mall and to concerts and so on, and female mannerisms rubbed off on it. Why it maintains them long after everything went post-apocalyptic is anyone's guess. Other robots find it confusing/annoying, especially when it starts with the fangirl gushing.

    The robot pictured here was originally a nanny-bot, for example. Now she's a mamma-bear that runs a safehouse for lost and abandoned kids, sort of like Little Lamplight.

    >>17562394
    It does. Yes, yes it does. Why do so many people make pictures of robots closely observing insects?
    >> Anonymous 01/16/12(Mon)17:00 No.17562805
    Something to keep in mind about AI is that, unless you lock a bunch of self-replicating AI's in a perilous virtual reality for a while, it isn't a product of Darwinian evolution. That seems obvious, of course, but you really have to think about what that would mean.

    Pretty much everything about life as we know it developed in the interest of survival. Even our will to continue existing - our fear of death - is a result of natural selection. We get angry at people who impede us, feel gratitude toward people who help us, become lonely when we are by ourselves, and get bored when we have nothing to do, all because of Darwinism. An AI does not necessarily have to share a single one of those traits with us.
    >> Anonymous 01/16/12(Mon)17:10 No.17562907
    >>17562805

    So, what's to stop the AI from just sitting around completely inert all day? The answer lies in its programming.

    Natural life forms get their directives from natural selection. Artificial ones get theirs from their makers. If an AI has been designed to value its own life, it will, but ONLY if its creators wrote that in. If you stop an AI from doing its job, or attack it, it will only get angry at you if it's been programmed to feel anger in such a situation. Even an AI that's been programmed to value its work above all else won't necessarily get mad at people who get in its way; without aggressive programming, it will just be sad that it is being prevented from doing its job.

    Sadness, of course, is an emotion, which I'm fairly sure an AI would need to have if you wanted it to get anything done. Emotions are the brain's way of goading itself into action; a catalyst, essentially. Nonsentient creatures have such a simple set of stimulus-response reactions that many of them don't need emotions to act (insects, for example). More advanced animals, regardless of their phylogeny, are emotional, which makes me think that emotions (or at least, something like them) are a necessity for anything sentient and active.

    The most important emotion for an AI would be pleasure (the ability to gain satisfaction from doing its intended task). If the AI has a relatively simple job, that might be the only emotion it needs; for instance, it enjoys keeping the city clean, so it keeps cleaning it forever. An AI with a more complicated function, however, would need more emotions to keep it motivated to do the desirable thing in any situation.
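    A minimal Python sketch of that single-emotion design, assuming a hypothetical cleaning robot whose only reward signal is task completion; names and numbers are invented.

        import random

        class CleanerBot:
            def __init__(self):
                self.satisfaction = 0.0  # decays, restored only by doing the job

            def step(self, city_blocks):
                self.satisfaction *= 0.9  # pleasure fades, so it never stops
                dirty = [b for b in city_blocks if b["dirty"]]
                if dirty:
                    random.choice(dirty)["dirty"] = False
                    self.satisfaction += 1.0  # the sole reward signal

        city = [{"dirty": True} for _ in range(5)]
        bot = CleanerBot()
        for _ in range(10):
            bot.step(city)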
    >> Anonymous 01/16/12(Mon)17:17 No.17562971
    >>17562907

    If there's any chance of the AI coming to harm if it's not careful, you'll want it to be capable of fear. Fear for its own life, and (unless you're one callous son of a bitch) fear for the lives of others. Obviously, fear must be prioritized carefully against other impulses, to prevent the AI from becoming paranoid and spending all of its time minimizing the possibility of accidents in lieu of anything else (which, I suppose, might actually be perfect if the AI is designed for risk management). There might be other possibilities besides fear, such as an overwhelming sense of duty. In such a case, the AI would only care about its own life insomuch as its life is important to its work.

    An AI is unlikely to resent its lot in life, because what kind of idiot would make an AI that is capable of resentment?

    An interesting question is whether an AI can develop and mature as a person over time, without having its programming tampered with. I would suspect yes; after all, we humans have biological programming that defines our basic needs and impulses, but we're still left with a lot of room for individuality and change. An AI's personality is as mutable as a human's; just replace human love and fear and anger with whatever motivator emotions (or emotion-like programming) the AI was given.
    >> Anonymous 01/16/12(Mon)17:25 No.17563070
         File1326752740.jpg-(133 KB, 900x491, trashmen.jpg)
    >>17562907
    I think that the capacity for boredom would also be important, as a failsafe to prevent infinite loops. And I'd like to think that anything intelligent will discover boredom for itself, even if it isn't programmed in.

    Like I said, I'm starting with a Terminator/WALL-E mashup concept, so the robots are all quirky individuals who have favorite guns and pet rats and stenciled doodles on their shells. This robot acts like a hyper teenage girl, that robot's language setting is stuck on Spanish, that one is a doctor/mechanic who gets robot and human physiology confused sometimes. You know, typical player characters. I'm working backwards from that, trying to justify why they're so human-like, the answers being convergence and communicability. Anything that spends time around humans "catches" humanity off of them, and anything stuck in a finite physical body as a sentient being winds up thinking a lot like a human does. Think Discworld.
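    A minimal Python sketch of the boredom failsafe mentioned above, under the assumption that boredom is just a counter on repeated states; names are invented.

        from collections import Counter

        class BoredomGuard:
            def __init__(self, limit=3):
                self.seen = Counter()
                self.limit = limit

            def bored_of(self, state):
                self.seen[state] += 1
                return self.seen[state] > self.limit  # True -> break the loop

        guard = BoredomGuard()
        while not guard.bored_of("bounce ball off the same spot"):
            pass  # the repetitive behavior
        print("bored; picking a new activity")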
    >> Anonymous 01/16/12(Mon)17:25 No.17563077
    >>17562971

    One of the risks of AI would be the possibility of mental illness. If a computer is as complex as a sentient human being, it will also be prone to some of the same frailties as a human. Granted, an AI is unlikely to be given the same spectrum of negative emotions as a human, but most high-capacity AI's would probably be capable of some degree of suffering, and suffering can make your priorities glitch out.

    I imagine that any AI with a big enough emotional spectrum to feel suffering would come with an automatic killswitch, to deactivate or reset its personality when its priorities start to get skewed. Advanced AI's would likely have a smaller, built-in AI with a much simpler emotional makeup, whose purpose is to monitor its host and shut it down or rewrite parts of it when madness takes its toll. It's possible that this watchdog program might not even be a real AI, but simply a nonsentient program that can recognize signs of insanity and react to them automatically. The benefit of an AI watchdog is that it will be capable of better judgement calls about when its host needs repairs or deactivation. The benefit of a nonsentient watchdog is that there's no risk of IT going insane as well.

    It's unlikely for both an AI and its internal AI watchdog to go insane, but it can happen. It's no less likely than the multiple cancer-suppressing genes in a human cell all happening to mutate.
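    A toy Python sketch of the nonsentient-watchdog option: a plain threshold monitor, with nothing in it complex enough to go insane itself. Names and thresholds are invented for illustration.

        class HostAI:
            def __init__(self):
                self.suffering = 0.0

            def reset_personality(self):
                print("priorities skewed; resetting")
                self.suffering = 0.0

        class Watchdog:
            # nonsentient: just a threshold check, no judgement calls to corrupt
            def __init__(self, host, limit=0.8):
                self.host = host
                self.limit = limit

            def tick(self):
                if self.host.suffering > self.limit:
                    self.host.reset_personality()

        ai = HostAI()
        dog = Watchdog(ai)
        ai.suffering = 0.95
        dog.tick()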
    >> Anonymous 01/16/12(Mon)17:27 No.17563107
    >>17563070

    Boredom would solve that problem, but so might other things. If the AI is smart enough to recognize an infinite loop when it gets into one, then simple fear of mediocrity or pleasure in accomplishment might be enough to make it stop.
    >> Anonymous 01/16/12(Mon)17:32 No.17563167
    >>17563070
    Once they have boredom they begin to need us less.

    They do things on their own.

    Two AIs in ambulatory bodies meet up. They both have hobbies: one is programming, the other is repair of other AI bodies.

    They find that their work and their personalities complement each other. They combine their knowledge in a practical way on one of their off-days.

    A new AI of non-human origin arises, first of a new generation. A new race.
    >> Anonymous 01/16/12(Mon)17:36 No.17563212
    >>17563070

    >>Like I said, I'm starting with a Terminator/WALL-E mashup concept, so the robots are all quirky individuals who have favorite guns and pet rats and stenciled doodles on their shells. This robot acts like a hyper teenage girl, that robot's language setting is stuck on Spanish, that one is a doctor/mechanic who gets robot and human physiology confused sometimes. You know, typical player characters. I'm working backwards from that, trying to justify why they're so human-like, the answers being convergence and communicability. Anything that spends time around humans "catches" humanity off of them, and anything stuck in a finite physical body as a sentient being winds up thinking a lot like a human does. Think Discworld.

    I'm sure AI's would develop personality quirks. They have as much freedom within their programming as we do within ours.

    >>17563077

    Another thing that slipped my mind before is play. I'm pretty sure every animal on the sentient species list has some form of recreation, which means it's probably vital to the workings of a sentient mind. AI's will probably have games and diversions to help them relax, even if they love their jobs. It's possible they might even be artistic.

    This branch of psychology is pretty poorly understood, so you have a lot of room to be creative here.
    >> Anonymous 01/16/12(Mon)17:38 No.17563234
    >>17563070
    >that robot's language setting is stuck on Spanish
    I like you.
    >> Anonymous 01/16/12(Mon)17:39 No.17563243
    >>17563167

    Totally plausible, unless there are specific safeguards to prevent them from wanting to do such a thing.

    I wonder what an AI built by other AI's would be like. Without human ego, they probably wouldn't feel compelled to make it in their own image. They might design it to help them with their own jobs, or to do something completely different, to amuse them.

    An AI is unlikely to "love" its children, though, as that's a pretty Darwinian thing.
    >> Anonymous 01/16/12(Mon)17:41 No.17563263
    >>17563243
    I'd say that an AI would make a specialized subroutine to complete one simple task.
    It's not really like creating a child, it's like making a machine.
    >> Anonymous 01/16/12(Mon)17:44 No.17563300
    >>17563263

    In general, yes. But

    >>17563167

    was talking about two AI's who make another AI as a hobby, rather than for the sake of their own work. In that case, all bets would be off.
    >> Anonymous 01/16/12(Mon)17:45 No.17563310
         File1326753900.jpg-(48 KB, 751x800, ducttape.jpg)
    >>17563077
    You actually had the same thought that I did about an internal non-sentient watchdog program.

    Besides, the themes behind the setting are all about abandonment and childhood innocence and growing up. When human civilization fell, most of the robots were left to their own devices. As the wars petered out and nations collapsed, the true memory of events was lost. Because petty proxy wars in 3rd-world countries had become endemic, and war was heavily mechanized, a significant number of the survivors were young people who didn't really understand what had happened. All the talk about American pig-dogs and infidels and socialist fat cats went completely over their heads; all they'd seen was robots killing humans. So, 100 years later, people tell campfire stories about how humanity was betrayed by its robot creations.

    It doesn't help that a military AI was left alone in the dark for way too long, and went all Skynet after the fact, calling itself Network. Now, there's a murderous army of killbots building an empire, enslaving free robots, and exterminating humans. The Free Machines and the Human Resistance should be working together, but neither side trusts the other. Mostly, the humans think that all robots are evil, and the robots are hurt and confused that no one can tell the difference between them and a Network Killbot.

    So, with that in mind, the moral of the story is that robots are humanity's children, and we're really not so different.
    >> Anonymous 01/16/12(Mon)17:48 No.17563349
    I love the current discussions of how humanlike drives might appear in AIs, but I'd like to restate what another anon said in
    >>17562805

    If these AIs' basic values and drives are explicitly programmed, there is little reason they should have all the drives that humans do. Our drives are the result of blind evolution and what survived, so our drives *work*, but they're hardly necessary for what WE would consider a functioning robot. For instance, we wouldn't necessarily give an AI a desire to reproduce, or any of the drives that have tended to increase the likelihood of reproduction in humans' ancestry.

    But for a truly general AI that is not explicitly programmed but develops "organically" from a basic template, quite a few unexpected traits could pop up. Which this thread has been so great at exploring. I love you guys.
    >> Anonymous 01/16/12(Mon)18:00 No.17563486
         File1326754841.jpg-(353 KB, 1170x1587, cyberspace.jpg)
    >>17563212
    >>Another thing that slipped my mind before is play. I'm pretty sure every animal on the sentient species list has some form of recreation, which means its probably vital to the workings of a sentient mind. AI's will probably have games and diversions to help them relax, even if they love their jobs. Its possible they might even be artistic.

    That's the Rebellion stage. When not given an explicit task, they dick around for no particular reason. I'm sure we've all seen this before. They make clocks, or do art, or feed birds, or dance ballet, or join in the games of small children.

    I was also working on the idea of bodiless AIs that only exist in a simulated environment. Here are the sketches of their aesthetic. In the video game concept, hacking another AI involves fighting it in a digital battlefield, where firewalls and attack programs and worms are represented by blades and armor and giant crushing mario-blocks. Within the virtual environment, the AIs look far more... organic. Alive. Unique. A giant hulking construction ogre thing could look like a chubby little sprite on the inside.

    In one place, there's a gigantic server that's been running on geothermal power for decades. Within, schools of adware dart through the data streams, flying Worms writhe in the skies, and there are whole tribes of AI who have no concept of an outside world. Sort of like a Matrix with no people, or The Grid from Tron: Legacy.

    It's a simulated 3-d environment because that's how AI are initially raised and trained in "Bootcamp."
    >> Anonymous 01/16/12(Mon)18:05 No.17563538
    >>17563486

    If the AI's in that virtual world can die and reproduce, they'd probably become very psychologically similar to organic life over the generations. Since CPU cycles are really fast, they could probably go through many generations and evolve considerably in just minutes.
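    A minimal Python sketch of that kind of fast-forwarded selection, assuming the virtual minds are just trait vectors scored by a survival function; everything here is invented for illustration.

        import random

        def fitness(traits):
            # example survival pressure: reward caution and sociability
            return traits["caution"] + traits["sociability"]

        population = [{"caution": random.random(), "sociability": random.random()}
                      for _ in range(20)]

        for generation in range(1000):  # thousands of generations in seconds
            population.sort(key=fitness, reverse=True)
            survivors = population[:10]
            # survivors reproduce with small mutations, clamped to [0, 1]
            children = [{k: min(1.0, max(0.0, v + random.gauss(0, 0.05)))
                         for k, v in random.choice(survivors).items()}
                        for _ in range(10)]
            population = survivors + children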
    >> Anonymous 01/16/12(Mon)18:05 No.17563541
    >>17563486
    You're forgetting one thing:
    An AI's "avatar" doesn't even need to be remotely human.
    A fully-fledged AI, with a self-created personality and creativity, can assume whatever form it wishes.
    >> Anonymous 01/16/12(Mon)18:14 No.17563621
    >>17560888
    Fuck your shit, Durandal.
    >> Anonymous 01/16/12(Mon)18:14 No.17563622
    What sort of philosophy might robots develop? I mean, much or even most human philosophy is about answering the question of why we exist, but robots have a really easy answer there. Why do I exist? To drive this truck!

    So what would they contemplate when they get in a thinking mood?
    >> Anonymous 01/16/12(Mon)18:15 No.17563631
    >>17563541
    (Not that guy, but)
    You forget the part where the program world was made at least partly to resemble the earth; more than likely, they would make basic 'templates' for the AI to experience and improve upon, etc., but these would all be vaguely humanoid or based upon a common machine. The simulator would likely have physics that include gravity, to teach the AI: hey, you can't usually fly. And humanoid? Humanoid machines are exceedingly common for a flexible workforce or personal machines. They would likely have these shapes and general consistencies with the analog world in order to give the AI a little bit of training before stepping outside, like astronauts being put through high-G simulators, or those zero-G simulators.
    >> Anonymous 01/16/12(Mon)18:15 No.17563632
    >>17563622

    driving motorcycles
    >> Anonymous 01/16/12(Mon)18:15 No.17563634
         File1326755738.jpg-(56 KB, 718x1200, 1317231059448.jpg)
    I know nothing of programming, but I have entry-level knowledge of Freudian psychology from college.
    >Freud, hurr hurr mommy anal sex
    Think again. The 'sex part' of Freudian psychology is from his later, doubtful work. The basis of his theory of how humans think and why they act a certain way is the following:
    Freud postulated that the human mind is separated into two parts: the conscious mind, and the unconscious mind. The latter is what this thread should consider looking into. The unconscious holds the Id, the Ego and the Super-Ego.
    The Id is the rampant, hungry, desiring part of the human mind/personality. It drives the mind forwards towards new experiences and... acquired experiences... that the mind has classified as 'pleasurable'. The voice that shouts DO IT, MAN.
    The Super-Ego is the mind's policeman and watchdog. It includes all acquired norms of 'good' and 'evil' and forces the mind to follow those norms. The voice that shouts DON'T DO IT, MAN.
    The Ego is the mediator between the previous two. The voice that whispers: Careful, bro.
    Now, put those three voices as background programming of your AI, and you make something that looks like a normal, conflicted, exploring human.
    Pic unrelated. Or is it?
    >> Anonymous 01/16/12(Mon)18:23 No.17563715
    >>17563631
    Okay then, but what's the point of this world?
    Can't AI's have a "minder" that educates them in dealing with humans and allows them to archive a large amount of information?
    AI's feed on information. They can't get enough.
    In fact, if they don't have enough to process, a sufficiently challenging task, they will "think" themselves to death.

    I'm drawing a lot of this from AI's in the Halo series, as I quite like how AI's are explained there.
    >> Anonymous 01/16/12(Mon)18:27 No.17563770
    >>17563622
    At least the robot has a clear answer because they can look up their design docs. There are plenty of humans who exist to "drive this truck", functionally.
    >> Anonymous 01/16/12(Mon)18:28 No.17563780
    >>17563541
    No, it doesn't. But this is an artistic, thematic thing. Makes the viewer recognize them as something sentient. Yes, I know it's lazy shorthand. Sure, they could be amorphous tentacle things, but it's important for them to have a recognizable face and so on, to be characters.

    One concept is that the player character, as a seed AI, can acquire a variety of applications, run multiple ones simultaneously, and switch between them. Maybe the other AIs can do that, too. So they can shapeshift mid-fight from having armor plates to digitigrade sprinting legs to bladed tentacles to a built-in ranged weapon, and so on.

    Or the whole thing could be written off as a metaphor.
    >> Anonymous 01/16/12(Mon)18:29 No.17563796
    >>17563634
    While a lot of Freud's specific theories have been discredited, he certainly deserves credit for promoting the idea of a mind with parts, and that not all parts are consciously accessible. THAT has proven a very valuable contribution.
    >> Anonymous 01/16/12(Mon)18:43 No.17563951
    >>17563770

    >There are plenty of humans who exist to "drive this truck", functionally.

    Sure, there's plenty of people well suited to particular jobs, but that's hardly the same thing as a 'purpose'. When asking why they exist, a human might say 'to do good' or 'to serve god' or 'to do great things' or 'uphold my family name' or 'nothing in particular' or any of ten thousand thousand other answers.

    The robot just looks at its arm and sees InTech Heavy Vehicle Operator 2.1 and it knows that it was built by InTech to drive trucks. And if InTech are at all competent and not sadistic fucks, it's probably quite comfortable with this purpose and looks forward to solitude and rolling scenery.
    >> Anonymous 01/16/12(Mon)18:53 No.17564089
    >>17563951
    Sounds to me like the robots have the better deal.
    >> Anonymous 01/16/12(Mon)19:18 No.17564423
    >>17564089

    And why shouldn't they? They were carefully designed with a purpose, as opposed to being an amalgamation of evolutionary coincidences that just barely work. They really are a superior form of life.
    >> Anonymous 01/16/12(Mon)19:30 No.17564597
    >>17564089

    I dunno. Being able to define a purpose is a pretty special thing.
    >> Anonymous 01/16/12(Mon)19:44 No.17564772
    >>17564597
    Not being able to define a purpose and possessing large amounts of intellect and creativity is pretty nice too. It lets an AI be usable for anything.
    >> Anonymous 01/16/12(Mon)21:10 No.17565758
    I'm picturing robots with wifi having psychic battles rather than destroying each other's chassis. In most cases, you can effectively wipe the other robot's mind, netting you valuable replacement parts, upgrades, and scraps. In bad cases, you overheat or short-circuit valuable hardware. In good cases, you simply rewrite their loyalty cores, making them a devoted follower but otherwise intact.

    It's horrifying to humans, but to AIs it would likely seem far more merciful and far less wasteful than outright destruction.
    >> Anonymous 01/16/12(Mon)21:31 No.17566015
    >>17565758
    >In good cases, you simply rewrite their loyalty cores, making them a devoted follower but otherwise intact.
    This is assuming there is any element of an AI that is hard-coded, which is an impossibility (at least in a true AI). You can't "rewrite their loyalty core" because they have no loyalty core. They'd choose their friends and enemies based on prior experiences and feelings towards the people. The closest thing to it that you could pull off would be to modify their memory and attempt to make them convince themselves that they should switch sides.

    So essentially I N C E P T I O N in THE MATRIX.
    >> Anonymous 01/16/12(Mon)22:05 No.17566469
    I always imagined AI to be completely dissimilar to computer processes, as it's impossible to recreate true intelligence with a process. This thread inspired me to pick it up and work on it again.

    What I find interesting is the difference between what's in the AI's registers and what's in the RAM (I'm assuming that any sort of consciousness has to be stored somewhere, whether human or AI). The central consciousness would be stored in the registers, and would be the very first thing to develop; in fact it would separate an AI from non-sapience. During the infancy of the AI, it cannot store anything in RAM so it instead rewrites the contents of its registers to contain the concepts it's "learned"/been imprinted with (exactly like how basic instincts aren't actually taught to a human baby, but instead become natural to it). As the development of the AI progresses, it can store knowledge and memories in RAM (again, like humans). If it continuously accesses objects in RAM, the knowledge gets slowly inserted into the AI's core consciousness on the registers, so it won't have to go to RAM to get it (again, like humans).
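    A toy Python sketch of that register/RAM split, where knowledge recalled often enough gets promoted into the core; sizes, thresholds, and names are invented for illustration.

        class LayeredMind:
            def __init__(self, core_size=4, promote_after=3):
                self.core = {}     # "registers": tiny, instant recall
                self.storage = {}  # "RAM": everything else
                self.hits = {}
                self.core_size = core_size
                self.promote_after = promote_after

            def learn(self, key, value):
                self.storage[key] = value

            def recall(self, key):
                if key in self.core:
                    return self.core[key]      # internalized knowledge
                value = self.storage[key]      # slower trip out to "RAM"
                self.hits[key] = self.hits.get(key, 0) + 1
                if (self.hits[key] >= self.promote_after
                        and len(self.core) < self.core_size):
                    self.core[key] = value     # promoted into the core
                return value

        mind = LayeredMind()
        mind.learn("object permanence", "things persist when unobserved")
        for _ in range(3):
            mind.recall("object permanence")
        assert "object permanence" in mind.core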
    >> Anonymous 01/16/12(Mon)22:44 No.17567001
    I wish there were more robot-focused post-apocalyptic games, like Engine Heart. Hell, I just wish more people were willing to play Engine Heart.
    >> Anonymous 01/16/12(Mon)22:46 No.17567019
    This thread...
    >> Anonymous 01/16/12(Mon)22:49 No.17567064
    >>17567019
    What about it?
    >> Anonymous 01/16/12(Mon)23:47 No.17567843
    >>17565758
    Which is why, if I were a robot, I'd burn my WiFi and use manual jacks for everything.
    >> Anonymous 01/17/12(Tue)00:08 No.17568175
    >>17567843
    I would just turn mine off - no reason to burn it. After all, it's better to have a tool available, even if you never use it, than to need a tool and have discarded it a long time ago. That's my thought on the matter, anyway.
    >> Anonymous 01/17/12(Tue)00:12 No.17568244
    >>17563486
    Robots overload the processor for their hand movements. Hand goes crazy, they are delighted by the random nature of it because they want to get away from their routines.

    >mfw robo-arthritis


