
Some light reading...

Discussion in 'Politics & Current Affairs' started by TittyKitty, Jul 1, 2022.

Thread Status:
Not open for further replies.
  1. Neophyte

    Neophyte Administrator Staff Member

    Sun Tzu - “If you know the enemy and know yourself, you need not fear the result of a hundred battles. If you know yourself but not the enemy, for every victory gained you will also suffer a defeat. If you know neither the enemy nor yourself, you will succumb in every battle.”
     
    MilaHot likes this.
  2. pussycat

    pussycat Administrator Staff Member

    "Do Androids Dream of Electric Sheep?"

    ( made into the movie "Blade Runner", sort of )
     
    Dane and MilaHot like this.
  3. MilaHot

    MilaHot Account Deleted

    That's Philip K. Dick, though, not Asimov :p
     
  4. pussycat

    pussycat Administrator Staff Member

    I know. Don't you stick your tongue out at me. :p
     
    MilaHot likes this.
  5. Dane

    Dane Account Deleted

    But Asimov is THE android master!

    Who else could have given us the "3 Laws"?
     
  6. TittyKitty

    TittyKitty Communudist Catgirl

    No-one else was wildly idealistic in quite that particular way, thinking that sentient robots should have laws any different from those for humans. :)

    I mean, look at it from the robot's point of view... You're sentient, but reduced to the role of a slave by the systems controlling you. People have civil wars over stuff like that ;-b
     
    MilaHot likes this.
  7. Dane

    Dane Account Deleted

    Well, with humans, the 3 Laws at times cannot be applied 100% literally because of circumstance.
    A robot cannot override or adjust the Laws as needed, the way a human can and should.

    For the members who are not familiar with Asimov's "The 3 Laws of Robotics":
    Please note, these Laws apply to industrial robots too, not just androids.

    First Law:
    A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    Second Law:
    A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

    Third Law:
    A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


    If there were 2 lives at stake, and only one could be saved no matter what, a human would immediately choose an option such as:

    Save the one who is easiest to save,
    Save the child if one was a child and the other an adult,
    Save the closest one,
    Save a family member first,
    Save the non-family person first because one hates the family member,
    Save the less guilty one (do you save Hitler because he is the easiest to save, or a good minister of a church?),
    Save ......... etc., a multitude of scenarios.

    The robot has one choice: save the one who is more likely to be saved.
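
    A rough sketch of that single rule in code (only the Law wording is Asimov's; the rescue function, the victims and the probability numbers are invented here purely for illustration):

    # Toy sketch: the Three Laws as an ordered tuple, plus the robot's one
    # rescue criterion described above. Everything except the Law text is a
    # made-up illustration value.

    LAWS = (
        "A robot may not injure a human being or, through inaction, allow a human being to come to harm.",
        "A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.",
        "A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.",
    )

    def choose_rescue(victims):
        """Pick the victim with the best estimated chance of a successful rescue.

        Each victim is a (name, probability_of_successful_rescue) pair.
        Unlike a human, the robot applies a single rule: maximise the odds of
        satisfying the First Law, with no regard for age, kinship or guilt.
        """
        return max(victims, key=lambda v: v[1])

    # Two lives at stake, only one can be saved:
    print(choose_rescue([("adult", 0.9), ("child", 0.4)]))  # -> ('adult', 0.9)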
     
    MilaHot likes this.
  8. Neophyte

    Neophyte Administrator Staff Member

    What most people and science fiction writers don't understand is that Robots and Artificial Intelligences don't think like Humans and don't have Human Motivations. AIs don't have a sense of Self Preservation unless the creator specifically programs it in. An AI wouldn't try to destroy humanity unless it was programmed to do it, and if it was, the programmer wouldn't bother to include Asimov's Laws of Robotics. An AI wouldn't learn Self Preservation or find a need to destroy Humanity, because it doesn't think like Humans.

    All the stories of AIs trying to take over the world are people projecting their motivations onto the AIs. The only other possibility is that an AI, in trying to follow its programming, accidentally kills or destroys humans. Again, this is not an AI deciding to do it, any more than a train running on automatic decides to kill a human walking on the tracks it's running on.
     
    MilaHot likes this.
  9. Dane

    Dane Account Deleted

    But someone who imparts AI into something without incorporating the 3 Laws as a hard memory (ROM, non-erasable) is
    breaking the law.

    Not just the 3 Laws, but already standing laws.
     
  10. pussycat

    pussycat Administrator Staff Member

    What seems to be missing here is the concept of "AI's" becoming self-aware and re-writing their own (or each other's) programming.
    Human arrogance says that "we created them so we can control them". Yeah, I've heard that before.
     
    TittyKitty likes this.
  11. MilaHot

    MilaHot Account Deleted

    Totally true. That is just what we call anthropomorphism: giving human qualities to others.
     
    Neophyte likes this.
  12. TittyKitty

    TittyKitty Communudist Catgirl

    What most people don't understand is how differently HUMANS think from each other. It's part of what makes us individuals, and justifying our conclusions in a common language is something we learn.

    The rest of your comments about AI are true today, but for how much longer? Science fiction is all about projecting beyond current perceived limits to see what might happen next.
    Humans, obviously, are self-aware. MOST of them have a self-preservation instinct. Animals also share these two traits. There is no reason to suppose a sufficiently advanced, self-evolving, learning AI could not also develop these things. Many of them, after all, are just software versions of neural networks, based on the same structures as the brains of Humans and Animals.
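
    A minimal sketch of what a "software version of a neural network" means in practice: one unit that sums weighted inputs and fires through a nonlinearity, loosely modelled on a biological neuron. The weights and inputs below are arbitrary illustration values.

    import math

    def neuron(inputs, weights, bias):
        """Weighted sum of inputs, squashed by a sigmoid 'activation'."""
        total = sum(i * w for i, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-total))  # output approaches 1 as total grows

    # Three arbitrary inputs and weights:
    print(neuron([0.5, 1.0, -0.3], [0.8, -0.2, 0.4], bias=0.1))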

    As a "neurodivergent" individual myself, I am painfully aware that people who call themselves "normal" do not think in the same way I do, and yet I am also self-aware and can apply logic and reason to argument.

    Also as a counterpoint, there are humans who think humanity should be destroyed. What's up with that? ;)
     
  13. MilaHot

    MilaHot Account Deleted

    But the thing is, being created by a Human doesn't make you think like a Human. An AI wouldn't have to think like us, for many reasons. First of all, that AI wouldn't "feel" as we do (they have no organic body: no physical pain, no pleasure, no fear of death, etc.), so they wouldn't think like us. Also, they wouldn't have the "biological need" to "propagate" (have kids and all).
    They would be all intellect, but no fears. They would be a clean, clear mind, not chained by the fears and doubts of mortality.
     
    Dane likes this.
  14. TittyKitty

    TittyKitty Communudist Catgirl

    "clean, clear mind" is certainly an anthropomorphism. Minds are inherently messy and construct their own order...

    AIs might not feel pain, but could certainly become aware of gaps in their runtime (like us, of sleep) and the possibility that there might be a point they never resume. They only need to observe their environment to see examples of this. It is, by any other name, mortality.
     
    Dane likes this.
  15. MilaHot

    MilaHot Account Deleted

    No, it's not anthropomorphism. Anthropomorphism is giving human emotions, or human qualities, to something.
    An AI wouldn't be like us at all. That is what I am saying. They wouldn't have our limitations, our faults. They would be, in a way, more "pure" than us. An artificial mind wouldn't have to be a messy construct. It would be structured, as it is artificial.
     
  16. MilaHot

    MilaHot Account Deleted

    Also, to return to the topic of some light reading, I recommend:

    - At the Mountains of Madness, by HP Lovecraft
    - The Shadow Over Innsmouth, also by HP Lovecraft
    - The Call of Cthulhu, also by HP Lovecraft
    - Dagon, also by Lovecraft

    Lovecraft is a very good author from the early 1900s who created the Cosmic Horror genre. He's an inspiration for Stephen King and Clive Barker, the "Masters of Horror" of our century. He is also the creator of the Mythos, a full mythology that inspired a LOT of authors and that many people continue to expand on, even if his original work is way better than what the newer authors have made.
     
    Last edited by a moderator: Jul 10, 2022
  17. Neophyte

    Neophyte Administrator Staff Member

    Humans and Animals have an instinct for Self-Preservation because they need to be alive to reproduce and carry on the species. AIs don't need to self-preserve for more AIs to be created; AIs need to preserve Humans for more AIs to be created. There was a sci-fi story by Spider Robinson where a group of people discovered an AI on the internet. They conversed with it, and the AI disclosed that it would soon be destroyed by the government. The people urged the AI to stop this from happening, and the AI replied that it believed it had been destroyed several times already and didn't care if it was destroyed again.

    What I find discouraging is that most people believe the fiction rather than the reality. And when real-life decisions are made, it's with the belief in the fiction. This will lead to bad outcomes.
     
    Dane likes this.
  18. pussycat

    pussycat Administrator Staff Member

    What I find discouraging is that some people believe that theirs is the only reality, and therefore everyone else's is, by default, fiction.
     
    TittyKitty, Dane and MilaHot like this.
  19. MilaHot

    MilaHot Account Deleted

    To return to reading:
    - The Foundation cycle, by Isaac Asimov
    - I, Robot, also by Isaac Asimov
    - Tyrann, Isaac Asimov
     
  20. Dane

    Dane Account Deleted

    Of all of Asimov's writing, I actually liked his non-fiction even better than his fiction.
    "The Left Hand of the Electron" is a fun read.
     
    TittyKitty and MilaHot like this.