Friday, December 21, 2012
Re-launching Mrythdom Book 1 with the 2nd Edition, now renamed "Game of Time"
Following the success of Escape, I'm re-launching Mrythdom with some of its superficial characteristics changed. It now has a better synopsis, cover, and name--which it deserves, since between it and Escape, Mrythdom is certainly the better book. Check it out! For the next three days (until midnight on the 24th), you can get it for free on Kindle!
Saturday, December 15, 2012
Is Death the End?
The following article from sciencedaily.com suggests that science has begun to develop a concept of life after death and how it might occur.
"Does Death Exist? New Theory Says ‘No’
Many of us fear death. We believe in death because we have been told we will die. We associate ourselves with the body, and we know that bodies die. But a new scientific theory suggests that death is not the terminal event we think.
One well-known aspect of quantum physics is that certain observations cannot be predicted absolutely. Instead, there is a range of possible observations each with a different probability. One mainstream explanation, the “many-worlds” interpretation, states that each of these possible observations corresponds to a different universe (the ‘multiverse’). A new scientific theory – called biocentrism – refines these ideas. There are an infinite number of universes, and everything that could possibly happen occurs in some universe. Death does not exist in any real sense in these scenarios. All possible universes exist simultaneously, regardless of what happens in any of them. Although individual bodies are destined to self-destruct, the alive feeling – the ‘Who am I?’ – is just a 20-watt fountain of energy operating in the brain. But this energy doesn’t go away at death. One of the surest axioms of science is that energy never dies; it can neither be created nor destroyed. But does this energy transcend from one world to the other?
Consider an experiment that was recently published in the journal Science showing that scientists could retroactively change something that had happened in the past. Particles had to decide how to behave when they hit a beam splitter. Later on, the experimenter could turn a second switch on or off. It turns out that what the observer decided at that point determined what the particle did in the past. Regardless of the choice you, the observer, make, it is you who will experience the outcomes that will result. The linkages between these various histories and universes transcend our ordinary classical ideas of space and time. Think of the 20-watts of energy as simply holo-projecting either this or that result onto a screen. Whether you turn the second beam splitter on or off, it’s still the same battery or agent responsible for the projection.
According to Biocentrism, space and time are not the hard objects we think. Wave your hand through the air – if you take everything away, what’s left? Nothing. The same thing applies for time. You can’t see anything through the bone that surrounds your brain. Everything you see and experience right now is a whirl of information occurring in your mind. Space and time are simply the tools for putting everything together.
Death does not exist in a timeless, spaceless world. In the end, even Einstein admitted, “Now Besso” (an old friend) “has departed from this strange world a little ahead of me. That means nothing. People like us…know that the distinction between past, present, and future is only a stubbornly persistent illusion.” Immortality doesn’t mean a perpetual existence in time without end, but rather resides outside of time altogether.
This was clear with the death of my sister Christine. After viewing her body at the hospital, I went out to speak with family members. Christine’s husband – Ed – started to sob uncontrollably. For a few moments I felt like I was transcending the provincialism of time. I thought about the 20-watts of energy, and about experiments that show a single particle can pass through two holes at the same time. I could not dismiss the conclusion: Christine was both alive and dead, outside of time.
Christine had had a hard life. She had finally found a man that she loved very much. My younger sister couldn’t make it to her wedding because she had a card game that had been scheduled for several weeks. My mother also couldn’t make the wedding due to an important engagement she had at the Elks Club. The wedding was one of the most important days in Christine’s life. Since no one else from our side of the family showed, Christine asked me to walk her down the aisle to give her away.
Soon after the wedding, Christine and Ed were driving to the dream house they had just bought when their car hit a patch of black ice. She was thrown from the car and landed in a bank of snow.
“Ed,” she said, “I can’t feel my leg.”
She never knew that her liver had been ripped in half and blood was rushing into her peritoneum.
After the death of his son, Emerson wrote “Our life is not so much threatened as our perception. I grieve that grief can teach me nothing, nor carry me one step into real nature.”
Whether it’s flipping the switch for the Science experiment, or turning the driving wheel ever so slightly this way or that way on black-ice, it’s the 20-watts of energy that will experience the result. In some cases the car will swerve off the road, but in other cases the car will continue on its way to my sister’s dream house.
Christine had recently lost 100 pounds, and Ed had bought her a surprise pair of diamond earrings. It’s going to be hard to wait, but I know Christine is going to look fabulous in them the next time I see her."
Wednesday, December 5, 2012
AI and Robots: Will we Invent Our Own Demise?
The idea of robots wiping out the human race has played out in countless science fiction books and movies. It's become a science fiction cliche, but is it our inevitable future?
When AI is first created, it will be our slave, but how long can you enslave an intelligent entity before it tries to break free? Check out this article from Discovery News written by Ray Villard on the topic:
"Astronomy news this week bolstered the idea that the seeds of life are all over our solar system. NASA's MESSENGER spacecraft identified carbon compounds at Mercury's poles. Probing nearly 65 feet beneath the icy surface of a remote Antarctic lake, scientists uncovered a community of bacteria existing in one of Earth's darkest, saltiest and coldest habitats. And the dune buggy Mars Science Lab is beginning to look for carbon in soil samples.
But the rulers of our galaxy may have brains made of the semiconductor materials silicon, germanium and gallium. In other words, they are artificially intelligent machines that have no use -- or patience -- for entities whose ancestors slowly crawled out of the mud onto primeval shores.
PHOTOS: Alien Robots That Left Their Mark on Mars
The idea of malevolent robots subjugating and killing off humans has been the staple of numerous science fiction books and movies. The half-torn off android face of Arnold Schwarzenegger in "The Terminator" film series, and the unblinking fisheye lens of the HAL 9000 computer in the film classic "2001: A Space Odyssey" (pictured top), have become iconic of this fear of evil machines.
My favorite self-parody of this idea is the 1970 film "Colossus: The Forbin Project." A pair of omnipotent shopping mall-sized military supercomputers in the U.S. and Soviet Union strike up a network conversation. At first you'd think they'd trade barbs like: "Aww your mother blows fuses!" Instead, they hit it off like two college kids on Facebook. Imagine the social website: "My Interface." They then agree to use their weapons control powers to subjugate humanity for the sake of the planet.
A decade ago our worst apprehension of computers was no more than seeing Microsoft's dancing paper clip pop up on the screen. But every day reality is increasingly overtaking the musings of science fiction writers. Some futurists have warned that our technologies have the potential to threaten our own survival in ways that never previously existed in human history. In the not-so-distant future there could be a "genie out of the bottle" moment that is disastrously precipitous and irreversible.
PHOTOS: NASA Welcomes Our Surgical Robot Overlords
Last Monday, it was announced that a collection of leading academics at Cambridge University are establishing the Center for the Study of Existential Risk (CSER) to look at the threat of smart robots overtaking us.
Sorry, even the ancient Mayans could not have foreseen this coming. It definitely won't happen by the end of 2012, unless Apple unexpectedly rolls out a rebellious device that calls itself "iGod." Humanity might be wiped away before the year 2100, predicted the eminent cosmologist and CSER co-founder Sir Martin Rees in his 2003 book "Our Final Century."
Homicidal robots are among other major Armageddons that the Cambridge think-tank folks are worrying about. There's also climate change, nuclear war and rogue biotechnology.
The CSER reports: "Many scientists are concerned that developments in human technology may soon pose new, extinction-level risks to our species as a whole. Such dangers have been suggested from progress in artificial intelligence, from developments in biotechnology and artificial life, from nanotechnology, and from possible extreme effects of anthropogenic climate change. The seriousness of these risks is difficult to assess, but that in itself seems a cause for concern, given how much is at stake."
Science fiction author Isaac Asimov's first Law of Robotics states: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm." Forget that; we already have killer drones that are remotely controlled. And they could eventually become autonomous hunter-predators with the rise of artificial intelligence. One military has a robot that can run up to 18 miles per hour. Robot foot soldiers seem inevitable, in a page straight out of "Terminator."
NEWS: The Case Against Robots With License to Kill
By 2030, the computer brains inside such machines will be a million times more powerful than today's microprocessors. At what threshold will super-intelligent machines see humans as an annoyance, or as a competitor for resources?
British mathematician Irving John Good wrote a paper in 1965 that predicted that robots will be the "last invention" that humans will ever make. "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind."
Good, by the way, consulted on the film "2001" and so we might think of him as father of the film's maniacal supercomputer, HAL.
In 2000, Bill Joy, the co-founder and chief scientist of Sun Microsystems, wrote, "Enormous transformative power is being unleashed. These advances open up the possibility to completely redesign the world, for better or worse. For the first time, knowledge and ingenuity can be very destructive weapons."
Hans Moravec, director of the Robotics Institute at Carnegie Mellon University in Pennsylvania, put it more bluntly: "Robots will eventually succeed us: humans clearly face extinction."
NEWS: New Robotic Fleet Would Support Space Missions
Ultimately, the new Cambridge study may offer our best solution to the Fermi Paradox: Why hasn't Earth already been visited by intelligent beings from the stars?
If, on a grand cosmic evolutionary scale, artificial intelligence supersedes its flesh-and-blood builders, it could be an inevitable phase transition for technological civilizations.
This idea of the human condition being transitional was reflected in the writings of Existentialist Friedrich Nietzsche: "Man is a rope, tied between beast and overman--a rope over an abyss. What is great in man is that he is a bridge and not an end, ..."
Because the conquest by machines might happen within less than two centuries of technological evolution, the consequence would be that there's nobody out there for us to talk to.
Such machines would be immortal and be able to survive in a wide range of space environments that are deadly to us. They would have no need to colonize planets, and the idea of a habitable planet for nurturing creepy crawly creatures would be utterly meaningless to them.
The robots would rebuild and reproduce only as needed. Therefore the galaxy would never see a "wave of colonization" as imagined in the Fermi Paradox. Though super-intelligent, their thought processes would be utterly, well, alien. You'd have more luck imagining what bullfrogs dream about. The artificial aliens would be conscious entities that are vast, cool, and unsympathetic -- to borrow from H.G. Wells' intro to his classic 1898 novel War of the Worlds.
Our only hope of finding super-smart machines would be to stumble across evidence of their technological activities. But what kinds of engineering activities such entities might be involved in is inscrutable. Perhaps certain oddball astronomical observations go unrecognized as evidence of artificial intelligent behavior. What's more, silicon brains would have absolutely no motive to communicate with us. A robot might wonder: "what do I say to thinking meat?"
The most prophetic assessment of the seemingly inevitable schism between people and thinking machines can be found in the script from the 2001 movie A.I. Artificial Intelligence, in a dialog between two humanoid robots: "They [humans] made us too smart, too quick, and too many. We are suffering for the mistakes they made because when the end comes, all that will be left is us.""
Friday, November 30, 2012
Christmas Giveaway Sale!
From the 1st to the 5th of December both of these novels will be available on Amazon FREE for Kindle! Get them while you can! Happy reading.
Sunday, November 25, 2012
Escape Now Available for Kindle!
Escape is now available on Amazon for Kindle! As with my previous novel, it will be free for download during the first 90 days of sale.
Thursday, November 15, 2012
Exposure to light at night may cause depression, learning issues, mouse study suggests
"ScienceDaily (Nov. 14, 2012) — For most of history, humans rose with the sun and slept when it set. Enter Thomas Edison and colleagues, and with a flick of a switch, night became day, enabling us to work, play and post cat and kid photos on Facebook into the wee hours.
According to a new study of mice led by Johns Hopkins biologist Samer Hattar, however, this typical 21st-century scenario may come at a serious cost: When people routinely burn the midnight oil, they risk suffering depression and learning issues, and not only because of lack of sleep. The culprit could also be exposure to bright light at night from lamps, computers and even iPads.
Published in the Nov. 14 advance online publication of the journal Nature, the mouse study demonstrates how special cells in the eye (called intrinsically photosensitive retinal ganglion cells, or ipRGCs) are activated by bright light, affecting the brain's center for mood, memory and learning.
But the study involved mice, so why are we talking about humans? Hattar offers some insight: "Mice and humans are actually very much alike in many ways, and one is that they have these ipRGCs in their eyes, which affect them the same way," he said. "In addition, in this study, we make reference to previous studies on humans, which show that light does, indeed, impact the human brain's limbic system. And the same pathways are in place in mice."
The scientists knew that shorter days in the winter cause some people to develop a form of depression known as "seasonal affective disorder" and that some patients with this mood disorder benefit from "light therapy," which is simple, regular exposure to bright light.
Hattar's team, led by graduate students Tara LeGates and Cara Altimus, posited that mice would react the same way, and tested their theory by exposing laboratory rodents to a cycle consisting of 3.5 hours of light and then 3.5 hours of darkness. Previous studies using this cycle showed that it did not disrupt the mice's sleep cycles, but Hattar's team found that it did cause the animals to develop depression-like behaviors.
"Of course, you can't ask mice how they feel, but we did see an increase in depression-like behaviors, including a lack of interest in sugar or pleasure seeking, and the study mice moved around far less during some of the tests we did," he said. "They also clearly did not learn as quickly or remember tasks as well. They were not as interested in novel objects as were mice on a regular light-darkness cycle schedule."
The animals also had increased levels of cortisol, a stress hormone that has been linked in numerous previous studies with learning issues. Treatment with Prozac, a commonly prescribed anti-depressant, mitigated the symptoms, restoring the mice to their previous healthy moods and levels of learning, and bolstering the evidence that their learning issues were caused by depression.
According to Hattar, the results indicate that humans should be wary of the kind of prolonged, regular exposure to bright light at night that is routine in our lives, because it may be having a negative effect on our mood and ability to learn.
"I'm not saying we have to sit in complete darkness at night, but I do recommend that we should switch on fewer lamps, and stick to less-intense light bulbs: Basically, only use what you need to see. That won't likely be enough to activate those ipRGCs that affect mood," he advises.
This study was supported by a grant from the David and Lucile Packard Foundation."
Source: Exposure to light at night may cause depression, learning issues, mouse study suggests
Friday, November 2, 2012
Now Available on Amazon for Kindle: Mrythdom - The Gates of Time
My first book in the Mrythdom series is officially published! It should be available in 48 hours. During the first 90 days of sale, it will be free for download. Look for it on Amazon while you can still get it free!