After humanity

Discussion in 'General Discussion' started by Arthgon, Jun 14, 2010.

  1. Dark Elf

    Dark Elf Administrator Staff Member

    Messages:
    10,796
    Media:
    34
    Likes Received:
    164
    Joined:
    Feb 6, 2002
    It wasn't long ago that a 75 GB hard drive was state of the art. Wouldn't be too surprised if the average game requires that kind of space a few years down the road.
     
  2. Archmage Orintil

    Archmage Orintil New Member

    Messages:
    586
    Likes Received:
    0
    Joined:
    Sep 18, 2007
    Best 'estimates' I've found have been between 1 and 1,000 TB... a rather large margin, really. The storage capacity is already achievable, either with a single hard drive or a massive bank of them (depending on where in that range the truth resides). It's the ability to process the data that is severely lacking.

    It took the world's largest network of supercomputers to functionally duplicate a single neuron. Current processing speeds are already thousands of times faster than a neuron, but the average human brain contains 100 billion neurons, each with thousands of connections, creating a parallel processing capacity that is currently impossible to match with conventional technology such as serial processors, and will remain so for the foreseeable future if we stick with conventional technology. (There really is a limit to how small chips can be made and remain stable, regardless of technological level, and as such there's a limit to how far such technology can be advanced. After that I think we'd need to go into quantum computing.)
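    The scale mismatch described above can be put in rough numbers. This is only a back-of-envelope sketch using the estimates quoted in the post (100 billion neurons, "thousands" of connections each, and an assumed ~100 firing events per synapse per second):

```python
# Back-of-envelope arithmetic for brain connectivity, using the rough
# figures quoted above - estimates, not measurements.
neurons = 100e9            # ~100 billion neurons in a human brain
synapses_per_neuron = 1e3  # "thousands of connections" each (low end)

total_synapses = neurons * synapses_per_neuron
print(f"~{total_synapses:.0e} synapses")  # on the order of 100 trillion

# Assuming each synapse is active ~100 times per second, a serial machine
# would need this many synaptic-event updates per second to keep pace:
updates_per_second = total_synapses * 100
print(f"~{updates_per_second:.0e} synaptic events/sec")
```

    Even at the low end of the quoted estimates, that is around 10^16 events per second to simulate serially in real time, which is why the post argues parallelism is the bottleneck rather than raw clock speed.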

    There was an experiment done not long ago using a piece of rat brain to wirelessly control a small robotic platform. The interesting part, if I recall correctly, was that no software was needed (beyond converting signals, anyway). The platform had basic sensors and a typical wheeled method of motion. The sliver of rat brain, after some time and coaxing with 'pleasure' (apparently rat brains find an increased zap pleasurable, but whatever, some people enjoy having car batteries hooked up to their nuts), figured out everything it needed to move the robot around a simple obstacle course.

    If we can develop a reliable method of keeping neurons alive in such an arguably harsh environment, a sort of bio-mechanical hybrid chip might one day soon replace the conventional technology mentioned above. I can see this providing leaps in the A.I. field, with processors measured not in Hz but by how many neuronal connections they have, and an entire industry built around packing as many neurons into as small a space as possible and getting them to grow as many connections as possible, overshadowing the current industrial race to pack as many transistors into as small a space as possible.
    It's pretty amazing how many technological advancements are made because a geek wanted to turn his or her favorite sci-fi into reality. A friend of mine is developing a wrist-mounted mid-air display because he wants to duplicate Galen's (a Babylon 5 character) ability to form holographic representations of things in his hand. No need to buy a stupidly expensive heliodisplay either. You just need some garden equipment and a pico projector.
     
  3. Arthgon

    Arthgon Well-Known Member

    Messages:
    2,737
    Likes Received:
    12
    Joined:
    Dec 30, 2007
    It will get even more amazing if they ever finish working on the Replicators.
     
  4. Jazintha Piper

    Jazintha Piper Member

    Messages:
    575
    Likes Received:
    2
    Joined:
    Jun 12, 2007
    According to a Star Trek episode I saw recently, until we manage to get AI to draw conclusions that its programming could not possibly have foreseen, we should be right.
     
  5. Grakelin

    Grakelin New Member

    Messages:
    2,128
    Likes Received:
    0
    Joined:
    Aug 2, 2007
    Hey, Orintil, ask Santa for some paragraphs for Christmas, would ya?
     
  6. Charonte

    Charonte Member

    Messages:
    899
    Likes Received:
    0
    Joined:
    Jul 19, 2009
    With regards to storage size, keep in mind that I was referring to traditional machines; no matter how large a data bank you set up, there will always be hardware limitations and you will always hit them. It might not even be storage directly: it could be an integer limit on the platform, it could be hard drive performance, it could be memory (bus) limitations, etc. There's only so much you can do with a Boolean system, you know. I don't know enough about other systems to pretend to suggest restrictions there, however.

    Muro, how else do you suggest they create AI, if/when it happens? Heck, even current bleeding-edge tech works by emulating an organic brain in the form of a neural network. Sentient thought may be the goal of AI, but odds are "emotion", or at least something similar, will be a byproduct of producing it.
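    For readers unfamiliar with the term, "emulating an organic brain in the form of a neural network" boils down to modelling each neuron as a weighted sum of its inputs pushed through a nonlinearity. A toy two-layer sketch (untrained, with made-up layer sizes, purely to illustrate the structure):

```python
import math
import random

random.seed(0)

def sigmoid(x):
    """Smooth 0-to-1 'firing rate' of an artificial neuron."""
    return 1.0 / (1.0 + math.exp(-x))

class TinyNet:
    """Minimal feed-forward net: each artificial 'neuron' integrates
    weighted inputs, loosely mimicking synaptic signal summation."""

    def __init__(self, n_in, n_hidden):
        # Random untrained weights; real systems would learn these.
        self.hidden = [[random.uniform(-1, 1) for _ in range(n_in)]
                       for _ in range(n_hidden)]
        self.out = [random.uniform(-1, 1) for _ in range(n_hidden)]

    def forward(self, inputs):
        h = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
             for ws in self.hidden]
        return sigmoid(sum(w * a for w, a in zip(self.out, h)))

net = TinyNet(n_in=3, n_hidden=4)
print(net.forward([1.0, 0.5, -0.5]))  # a single activation between 0 and 1
```

    The biological analogy is loose, but it is the sense in which current techniques "emulate" a brain: connections and weights rather than explicit rules.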
     
  7. Muro

    Muro Well-Known Member

    Messages:
    4,184
    Likes Received:
    22
    Joined:
    May 22, 2007
    My vision of AI is based on what I perceive as the definition of intelligence. The way I see it, AI - assuming its body would be electronic rather than organic - would work the way a human brain would if it were free of the influence of hormones and had the processing power of a supercomputer. That's what I'd call the perfect AI.

    If the concept worked, the result would be pure artificial intelligence - a system able to understand and analyse the data it possesses, draw logical conclusions from it and make the most rational decisions, no more, no less. It would be cold in its rationality, lacking the emotions and feelings which are simply a side effect of how our organic bodies work and its electronic body wouldn't. The machine would of course be able to distinguish positive from negative, but wouldn't actually be happy or sad about it - it would just be information to it.

    Now, if the only way to create AI turned out to be making the machine's physical structure more and more similar to our brain, this may look a whole lot different. If I'm not mistaken, even a brain which was theoretically extracted from a body and placed in a machine that kept it alive and functioning would still feel emotions, since it's impossible to eliminate the influence of hormones altogether, seeing how some neurotransmitters are hormones themselves.

    If the device storing the AI were close enough to an organic brain, perhaps it would need its own equivalents of neurotransmitters, making emotions and feelings inevitable. The result would be an Artificial Mind (where by "mind" I mean intelligence + emotions & feelings) - a creation not as perfect and efficient in terms of intelligence as pure AI, but perhaps the only creation containing AI that is possible to construct.

    Or perhaps not. How should I or any of us know? If I were sure which way of building an AI would actually work, I would have created one already.

    Or maybe I wouldn't, since I could very well think AI should never be created because of JESUS MURO STOP WRITING ALREADY.
     
  8. Grakelin

    Grakelin New Member

    Messages:
    2,128
    Likes Received:
    0
    Joined:
    Aug 2, 2007
    The answer is simple: Muro is an advanced AI, and he doesn't want anybody making any competition for him.
     
  9. Archmage Orintil

    Archmage Orintil New Member

    Messages:
    586
    Likes Received:
    0
    Joined:
    Sep 18, 2007
    Not until that sob takes back the pocket change gift card I got and gives me the Sword of Omens I asked for when I was 6.

    Muro, our emotional brain helps us gauge the 'importance' of a decision and whether it's 'good' or 'bad'. Without that, or some alternative method of viewing things subjectively, an AI relying strictly on logic would be able to draw conclusions from information, but could find itself incapable of making choices when faced with multiple, logically equal, conclusions.
     
  10. Mesteut

    Mesteut New Member

    Messages:
    686
    Likes Received:
    0
    Joined:
    Jun 28, 2009
    It may replace emotions with a utilitarian approach, though - generating the maximum happiness with the minimum unhappiness.
     
  11. Muro

    Muro Well-Known Member

    Messages:
    4,184
    Likes Received:
    22
    Joined:
    May 22, 2007
    "Happiness" and "unhappiness" may not be the best words, seeing how they are feelings and all. "Benefits" and "harms" would work fine, though. And yes, I would expect it to be as utilitarian as possible.
     
  12. Mesteut

    Mesteut New Member

    Messages:
    686
    Likes Received:
    0
    Joined:
    Jun 28, 2009
    Well, let's pray that it doesn't conclude that humanity is bound to harm itself, so that the least harm would be done by removing us from the equation.

    Assuming it is built, of course.
     
  13. Muro

    Muro Well-Known Member

    Messages:
    4,184
    Likes Received:
    22
    Joined:
    May 22, 2007
    If it weren't for the fact that I am myself a human and have a personal interest in humanity not being destroyed all of a sudden, I can't say that I wouldn't eventually agree with and support such a decision.

    I somehow skipped this earlier, even though I recall the part about the Sword of Omens. You edited it, didn't you? Didn't you!?

    Regarding emotions helping us decide what is right and wrong - do they really? I'd say opinions and our morality help us gauge the importance of decisions, while emotions only push us into hasty and potentially wrong ones. I myself am always more satisfied with decisions I make after all the emotions accompanying them fade away. The more of them fade and the more objective the decision, the smaller the chance that I'll eventually regret it and think that, if it weren't for emotions, I would have made the right decision in the first place.
     
  14. Charonte

    Charonte Member

    Messages:
    899
    Likes Received:
    0
    Joined:
    Jul 19, 2009
    Morality is an extension of emotion: we do or don't do something because we 'feel' it's wrong or otherwise; these 'feelings' are taught to us by society from the moment we're brought into it - they're nothing more than a set of justifiable behaviours passed on from generation to generation. They're justifiable in the sense that civilisation as a whole can understand and even appreciate the actions you took for whatever reason; it's all about pride and image. In that sense the ends do not justify the means within our current culture.

    When you're talking about a code of ethics or morality, think of it less as right and wrong and more as a way to subjectively make a decision when logical equality occurs. Emotions can be applied in the exact same way, and in fact both of these quite often override pure 'logic'.
     
  15. Archmage Orintil

    Archmage Orintil New Member

    Messages:
    586
    Likes Received:
    0
    Joined:
    Sep 18, 2007
    I said good or bad, which can be what we find right or wrong, but isn't limited to morality. It can apply to things in which morality is irrelevant. As an example: you're given a choice between two equally nutritious foods. They are exactly the same in every logical way. You receive the exact same amount of energy, they take the same amount of time to consume, they generate the same amount of waste, etc. How do you choose which one you'll eat? Most likely by whichever one you enjoy the taste of more, or the one that's aesthetically pleasing, or the one that smells good. Using strictly logical thought, it's impossible to make a choice here, as there's no quantitative aspect that can be calculated. You could of course randomly accept one, but that's not really making a choice.
    Yes, I did do an edit.
     
  16. Arthgon

    Arthgon Well-Known Member

    Messages:
    2,737
    Likes Received:
    12
    Joined:
    Dec 30, 2007
    True, but it also depends on which culture you are born and live in. What is acceptable to one culture's code of ethics and morality, a different culture will see as immoral, and thus unacceptable and very wrong.

    Also, even if the world comes to an end, or humanity is gone from Earth (be it by colonizing other planets or by destroying ourselves), there could be a chance that everything starts over again.

    PS: Are lightsabers or teleporters feasible?
     
  17. Zanza

    Zanza Well-Known Member

    Messages:
    3,296
    Likes Received:
    61
    Joined:
    Apr 20, 2009
    Lightsabers, no. At least not in the way you see them in Star Wars. Unless you have something to cap the laser at a specific length, you get this.

    The Spyder III is illegal in Australia as it is too powerful a laser; it is 50 times stronger than the lasers we use to point out the Moon.
     
  18. Grakelin

    Grakelin New Member

    Messages:
    2,128
    Likes Received:
    0
    Joined:
    Aug 2, 2007
    It used to be that the lightsaber was a filament wire that radiated the laser around it, but they retconned it.
     
  19. Muro

    Muro Well-Known Member

    Messages:
    4,184
    Likes Received:
    22
    Joined:
    May 22, 2007
    Still, such a strategy of making choices would work. We ourselves at times make decisions based on, say, a coin flip, when we really do consider the possibilities equal and/or don't really care. A machine would be able to estimate the resulting value of a decision with much greater precision than a human (for example, one of the fruits would take exactly 0.002 seconds less to consume, making the choice obvious), so equally valuable choices would be quite rare. And when they did happen, if ever, the machine could flip its internal coin. As a decision-making machine it could work like that, I'd say.
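    The decision procedure described above - estimate each option's value precisely, pick the best, and only fall back to an internal coin on exact ties - can be sketched in a few lines. The function name and the example utilities are made up for illustration:

```python
import random

def choose(options):
    """Pick the option with the highest estimated utility;
    break exact ties with an 'internal coin flip'."""
    best_value = max(options.values())
    best = [name for name, value in options.items() if value == best_value]
    # One candidate -> deterministic choice; several -> random tie-break.
    return random.choice(best)

# One fruit takes 0.002 s less to consume, so its utility is slightly higher
# and the choice is forced:
fruits = {"apple": 10.002, "pear": 10.000}
assert choose(fruits) == "apple"

# Exactly equal options: the machine flips its internal coin.
equal = {"apple": 10.0, "pear": 10.0}
print(choose(equal))  # "apple" or "pear", at random
```

    The point of the sketch is that the random branch is reached only when the utilities are exactly equal, which - given high-precision estimates - should be rare, matching the argument above.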

    I wanted to disagree with you, explaining how there can be an entirely objective morality, but - to my surprise - the more I thought about it, the more I realised that this is not so, seeing how in every morality there is at least one point we take for granted - one thing we consider the most important (be it life, biological diversity, the existence of the multiverse) - and morality sprouts from that very point, stating how that very thing should be protected at all costs and so on. If we state that nothing can be taken for granted, which would be a very intelligent and enlightened thing to say, then... fuck. No objective morality. No such thing as right & wrong, objectively speaking. I'm quite confused right now and have quite a few things to think about, but I thank you for starting that process of thinking.
     
  20. magikot

    magikot Well-Known Member

    Messages:
    1,688
    Likes Received:
    4
    Joined:
    Aug 29, 2003
    This, though incomplete, tries to dramatize Galt's Speech, which is pretty much the manifesto of Objectivist morality. Though flawed, it's the closest thing I've found to a purely objective morality. And, also though flawed, it's one of the two ways a robot would be programmed (we are still talking about that, right? I haven't kept up on the thread): either entirely Objectivist or entirely Altruistic.
     