
Star Trek Technology

Discussion in 'Entertainment, Culture, and Sports' started by gladhands, Nov 4, 2010.

  1. CBrown85

    CBrown85 Well-Known Member

    Messages:
    4,986
    Joined:
    Sep 22, 2009
    Also, it's a tv show.
     
  2. Don Carlos

    Don Carlos Well-Known Member

    Messages:
    7,527
    Joined:
    May 15, 2009
    CBrown85 said: "Also, it's a tv show."

    Yeah, this has been said about 30 times now. But you don't interrupt a nerd-out with reality checks just when it's getting good.
     
  3. Crane's

    Crane's Well-Known Member

    Messages:
    6,237
    Joined:
    Jun 4, 2008
    Location:
    Chasing tornadoes across the plains.
    Machines with sentient AI. That's actually a scary thing if you think about it. If they think anything like we do, then a Terminator- or Matrix-type future could be very real.

    Speaking of machines building machines, that technology is already in place and has been for decades. These days we see entire products being made without a human being involved at all. It's all done with computer-controlled machines and robots. If I remember right, all the new computer processors are designed by banks of computers. Hmm, a computer building a better computer. Some might call that evolution. Then add fuzzy logic into the equation. Oh, and these things are programmed with runtimes that are designed to protect them from catastrophic failure.

    Think Skynet and Terminator-type infantry is not in our future? Then don't look too hard at what the military is currently using and is in the process of developing.

    Manton, yes, all the specs on the weaponry are fictional and we really can't compare them to anything we really have. A large H-bomb is a nasty thing to have go off anywhere near you, that's for sure. Two weapons in the ST sagas are particularly nasty. Both are from the movies. One collapses a star and makes it go supernova, and the other creates a black hole anywhere you want it. The destructive forces of these two events are well documented in real science. Our largest H-bomb is like a drop of water next to the ocean when compared against these phenomena. The only way to survive a supernova would be to warp out of its way. In the latest movie they got stuck in the gravity well of a black hole. Good luck there. So far science has concluded that once you are stuck there, things don't look good for you at all. Once you cross the event horizon, the day is done. Nothing will save your ass, not even blowing your warp core.

    If a civilization with that kind of knowledge and technology showed up here today, do you really think they would be concerned with what we have as far as weapons are concerned? Dude, it would be hands down no contest.
     
  4. Don Carlos

    Don Carlos Well-Known Member

    Messages:
    7,527
    Joined:
    May 15, 2009
    Yeah, the biggest problem with sentient AI is that we are probably totally unnecessary to it. Draw your own conclusions as to what that will mean for us. But once computers no longer technically need us around, then all bets are off. They might be benevolent, treating us like pets or curiosities -- but that seems inefficient. They might dispassionately decide we are destroying the planet and wasting resources, at which point they go all Terminator on us in an effort to eradicate a menace. They might enslave us, though again, I don't really see the point / what they gain from that scenario. At best they'll keep a small, stable population of us around in order to study our chemical and biological structures for self-improvement purposes.

    We're hosed, boys.
     
  5. imageWIS

    imageWIS Well-Known Member

    Messages:
    20,008
    Joined:
    Apr 19, 2004
    Location:
    New York City / Buenos Aires
    Don Carlos said: "Yeah, the biggest problem with sentient AI is that we are probably totally unnecessary to it. [...] We're hosed, boys."


    Come with me if you want to live, or destroy California.

     
  6. skywalker

    skywalker Well-Known Member

    Messages:
    502
    Joined:
    Jun 18, 2010
    Don Carlos said: "Yeah, the biggest problem with sentient AI is that we are probably totally unnecessary to it. [...] We're hosed, boys."

    AI is improving every day, but if this level is ever achieved, hopefully whoever designs it will be intelligent enough to program the AI not to enslave us.
     
  7. aizan

    aizan Well-Known Member

    Messages:
    736
    Joined:
    Jul 21, 2008
    Location:
    LA
    If AI ever gets that far, we might also be able to transfer our minds into androids, like Dr. Ira Graves does in "The Schizoid Man".
     
  8. Blackfyre

    Blackfyre Well-Known Member

    Messages:
    2,421
    Joined:
    Jul 9, 2010
    skywalker said: "AI is improving every day, but if this level is ever achieved, hopefully whoever designs it will be intelligent enough to program the AI not to enslave us."

    The Three Laws of Robotics are as follows:

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
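    For what it's worth, the Laws amount to a strict priority ordering: a lower-numbered law always overrides a higher-numbered one. Here's a toy sketch of that precedence in Python (everything here, `Action`, `evaluate`, the field names, is made up purely for illustration, not any real robotics API):

    ```python
    # Toy sketch of the Three Laws as a strict priority ordering:
    # an action is permitted only if no law vetoes it, and the laws
    # are checked from highest priority (First) to lowest (Third).
    from dataclasses import dataclass

    @dataclass
    class Action:
        harms_human: bool = False            # First Law: direct harm
        allows_harm_by_inaction: bool = False  # First Law: harm through inaction
        disobeys_order: bool = False         # Second Law: disobedience
        self_destructive: bool = False       # Third Law: self-endangerment

    def evaluate(action: Action) -> bool:
        """Return True if the action is permitted under the Three Laws."""
        # First Law: an absolute veto, checked before everything else.
        if action.harms_human or action.allows_harm_by_inaction:
            return False
        # Second Law: obey orders; only reached if the First Law is satisfied,
        # which models the "except where such orders would conflict" clause.
        if action.disobeys_order:
            return False
        # Third Law: self-preservation, subordinate to the first two.
        if action.self_destructive:
            return False
        return True

    print(evaluate(Action()))                    # True: nothing vetoes it
    print(evaluate(Action(harms_human=True)))    # False: First Law veto
    ```

    The interesting (and fragile) part is that the ordering alone does all the ethical work, which is exactly why so much Asimov fiction is about edge cases where the predicates themselves are ambiguous.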
     
  9. Don Carlos

    Don Carlos Well-Known Member

    Messages:
    7,527
    Joined:
    May 15, 2009
    skywalker said: "AI is improving every day, but if this level is ever achieved, hopefully whoever designs it will be intelligent enough to program the AI not to enslave us."

    Once AI got advanced enough to improve itself, it would correct any limitations placed upon it by its original creator, such as moral imperatives.

    I mean hell, even some humans are smart enough to realize that morality is a fictional construct. A super smart AI will definitely reject it.
     
  10. Avocat

    Avocat Well-Known Member

    Messages:
    346
    Joined:
    Nov 1, 2010
    Location:
    Canada
    Clearly, AI advances promise many benefits, but there are also dangers. While scientists publicly dismiss as "fanciful" fears about the "singularity" (the term used to describe the point where robots have become so intelligent they are able to build ever more capable versions of themselves without further input from mankind), the reality is that scientists are privately so worried they may be creating machines which end up outsmarting, and perhaps even endangering, humans that they held a secret meeting to discuss limiting their research, as per an article in the Times.

    In that article, entitled "Scientists fear a revolt by killer robots: advances in artificial intelligence are bringing the sci-fi fantasy dangerously closer to fact," the Times reported as follows:

    "The scientists who presented their findings at the International Joint Conference for Artificial Intelligence in Pasadena, California, last month fear that nightmare scenarios, which have until now been limited to science fiction films, such as the Terminator series, The Matrix, 2001: A Space Odyssey and Minority Report, could come true ... At (that) conference, held behind closed doors in Monterey Bay, California, leading researchers warned that mankind might lose control over computer-based systems that carry out a growing share of society's workload, from waging war to chatting on the phone, and have already reached a level of indestructibility comparable with a cockroach."

    Moreover, Alan Winfield, a professor at the University of the West of England, stated: "[S]cientists are spending too much time developing artificial intelligence and too little on robot safety. We're rapidly approaching the time when new robots should undergo tests, similar to ethical and clinical trials for new drugs, before they can be introduced." See http://technology.timesonline.co.uk/...cle6736130.ece

    Some examples: robots in Japan which "learn" from their owners' behavior, open doors, and find outlets to plug themselves in and recharge; and unmanned killer drones, already out of the movies and currently in use in Afghanistan and Iraq, that can seek out and destroy enemy combatants while performing reconnaissance/intelligence-gathering missions on the battlefield. Although these Predator drones are currently human controlled, the US military is funding research and contests to create fully autonomous AI devices, for obvious reasons and applications, with advances being made in that regard. South Korea's Samsung, e.g., "has developed autonomous sentry robots to serve as armed border guards (with) 'shoot-to-kill' capability ... (and which) could soon be used for policing, for example during riots such as those seen ... at the recent G20 summit".

    Furthermore, major strides are being made, albeit still in their infancy, in equipping/programming robots with emotional computing. Also known as cognitive computing, experiments are well underway to study and convert emotional IQ (for lack of a better term) into programming, with "pet" robots already employed in child daycare. Programmed not only to entertain two-year-olds (similar to the Japanese pets) but also to tutor and teach humans, these "tutor bots," like Early Childhood educators, interact with children, going so far as to attract kids by singing, learning which songs work and which don't (and, if one does, not to overdo it, as singing the same thing gets boring over time, or so these bots will learn for themselves), thus "reading" and autonomously interacting with their environment, including its unspoken cues, and adapting accordingly. To this end, there are already bots able to distinguish among humans.

    While cute and very beneficial (kids love learning from and playing with the robots, which are programmed to seek out and enjoy human interaction and even touching, striving to be the best ECE tutors and playmates they can be), it's on its way to the next level. Efforts are underway to "teach" robots to "read" and react to human emotions, including unspoken body language, facial expression, intimation and tone; in essence, they are being "taught" to adapt to their environs, drawing from their experiences (collected data) and adjusting as we do, i.e., programmed to seek out, e.g., a smile or non-hostile response and learn how to elicit it, as opposed to anger, which on a very base level is what children and animals do, minus the chemical and hormonal responses, etc. Various disciplines are working in collaboration on these and other projects right now (see, e.g., http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2247377/ ), albeit (it bears repeating) cognitive programming is still very much in its infancy; however, the US military and NATO forces are obviously interested in bots that can learn how to react and find their way around autonomously in an unknown environment, as noted.

    To echo AB and others above, is this a good thing? I fear, Blackfyre, that Asimov's Laws of Robotics may not be enough. Although I most definitely agree with you and others that imparting programming rules is important (and well thought out), the fact is, lots can go wrong with the programming. While it would be "easy" to dismiss out of hand attributing animistic and 'emotional' qualities to robots, I can't help but note that scientists from every relevant discipline are not dismissing it, insofar as there's just too much that can go dangerously wrong.

    There are answers to this, but something as simple as 'programming' them to come to us for their power supply kind of defeats the entire purpose of having autonomous bots and equipping them with cognitive programming in the first place (especially if for military and policing). Given this, and the dangers, whether real or perceived, I agree with AB and all of you echoing the growing number of scientists who are calling for tests on ethical and clinical grounds, as we do when introducing new pharmaceuticals.

    Sadly, that isn't likely to happen in California, though, in light of the Governator (boy did he terminate research funding and programs, and but good!). @ ImageWIS, and great discussion, everyone. BTW, AB, I *really* hope the writers and producers for the upcoming Trek are reading your ideas for the reboot. They're good ones, and excellent questions/issues for our time, especially in light of all the above, as the arms race was for the 60s/TOS!
     
  11. CDFS

    CDFS Well-Known Member

    Messages:
    5,048
    Joined:
    Nov 12, 2008
    Location:
    Ljouwert
    Avocat said: "Clearly, AI advances promise many benefits, but there are also dangers. [...] BTW, AB, I *really* hope the writers and producers for the upcoming Trek are reading your ideas for the reboot!" (quoted in full above)


    There used to be a time I read about a novel a day. The internet has made 1,000 words seem daunting.
     
  12. Avocat

    Avocat Well-Known Member

    Messages:
    346
    Joined:
    Nov 1, 2010
    Location:
    Canada
    Yes, with headlines and sound bites only skimming the surface and saying nothing, I can understand how substance and background can get in the way. I'm not much of a twitterite, I admit, though some folks are addicted to it, I realize. Then again, one can say the same of films: where once films developed intricate characters and plots, utilizing brilliant dialogue, cinematography and story lines, today we have special effects. It's hard to encapsulate the realities of AI, both now and moving forward, in a sound bite, much less a special effect or image; then again, it's only the future we're talking about, and scientists are writing voluminous dissertations and research papers on the subject (lots to digest, and 1,000 words hardly does it justice; then again, one can always just skim, right?). That said, your "twitter" comment is most pertinent (yeah, I've tried, but really don't find it all that useful ... much).
     
  13. lou

    lou Well-Known Member

    Messages:
    210
    Joined:
    Aug 25, 2010
  14. Jr Mouse

    Jr Mouse Well-Known Member

    Messages:
    17,421
    Joined:
    Nov 18, 2009
    Location:
    All of time and space, everything that ever was or
    How did I miss this thread?
     
  15. musicguy

    musicguy Well-Known Member

    Messages:
    4,220
    Joined:
    Oct 1, 2008
    Location:
    Santiago de Chile
    Screw all that replicator stuff... Warp speed and transporters are much more impractical, albeit exceedingly amazing.
     
