by HarryStottle » Tue Oct 25, 2005 12:11 am
My play is. But the underlying science is real. Simulated universes are already here, albeit at somewhat lower resolution and scale than even a holodeck would require.
The holodeck will be version twenty-something! We'll get to that around the same time we can back up the human mind.
Heard about the "Millennium Flythru"? If you haven't yet done it, go to the site, read the blurb and download the file (a 120 MB AVI, and worth it. Trust me!). Turn off all the lights and play it full screen on the biggest and best monitor you've got access to, preferably at half speed for the most dramatic effect. This, too, is mind-blowing.
(Select some suitable music for maximum effect - my personal recommendation is Sheila Chandra's Abonecronedrone. Better still, get pleasantly stoned first.)
It is literally a simulation of the universe, featuring all the stars and galaxies we currently know about. Using 25 terabytes of data and a few weeks on a supercomputer, they've created a frame-by-frame animation of 10 billion "particles" to show what it would be like if we could fly through roughly one quarter of the currently known universe at a few million times the speed of light (ignoring some of the obvious difficulties with that, like the fact that we wouldn't actually see any of the light!). The AVI is the "movie" they've made of that animation. Absolutely stunning.
Meanwhile, back on terra firma, the models we create for weather forecasting are probably the most complex simulations in day-to-day use, and we all know how moderate their accuracy is, despite having the world's most powerful number-crunchers dedicated to the task. Nevertheless, between them, the Millennium Flythru and the weather-forecasting computers demonstrate a "proof of principle", both for the concept of sims in general and for my reverse-calculation proposal in particular.
They are frequently run "in the past" to compare their forecasts with what really happened. This is akin to the "tweaking" stage in my previous post. I don't know whether they've actually tried running their algorithms in reverse to see how quickly they diverge, but there is no reason, in principle, why they couldn't.
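I can't run a real weather model at home, but the reason a reverse run would diverge quickly is the same sensitive dependence on initial conditions that limits the forward forecasts, and you can see it in a dozen lines. Here's a toy sketch (Python, with the Lorenz system standing in for a real atmospheric model; nothing here comes from any actual forecasting code):

```python
# Two runs of the chaotic Lorenz system from starting points that differ
# by one part in a billion. The gap between them grows exponentially,
# the same property that makes running a model backwards hopeless once
# small errors creep in. Simple Euler stepping; purely illustrative.

def lorenz_step(x, y, z, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-9, 1.0, 1.0)           # the "same" weather, almost

for step in range(1, 6001):
    a, b = lorenz_step(*a), lorenz_step(*b)
    if step % 1000 == 0:
        gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {step * 0.005:4.1f}   divergence = {gap:.2e}")
```

Within a few dozen simulated time units the two runs are as far apart as any two random states of the system, and a reverse run accumulates error in exactly the same exponential fashion.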
The real question, of course, is whether atomic-scale simulation will ever be possible. If so, then omortality is, in my view, inevitable.
The probable answer is that it's likely to be purely a capacity problem. The data required to model a snapshot of any human brain has been estimated (see, for example, ) at around a petabyte (1,000 terabytes, or a million gigabytes), which is not imminent, but we're already creeping up on the terabyte. I've currently got a third of a terabyte in my PC, and if I tot up the storage capacity on the family network, we're already close to a full terabyte.
When I left my Civil Service job back in the late 80s, my office employed 1,500 staff and boasted a computer database capable of holding 11 GB of data. The machines and drives that stored and accessed this data occupied an air-conditioned space about half the size of a football pitch. Just 16 years later, I can store nearly a hundred times as much on my home network, and my 2 kg laptop alone holds more than 7 times as much data.
It is not unreasonable to expect a similar rate of progress over the next 16-20 years. If so, I would expect my then computer to be built into my personal communicator and probably fitted invisibly into my ear. I suspect the storage will not be onboard but stored centrally and accessed wirelessly, about 10 times faster than I currently access the data on my SATA hard drive. I expect to have private access to about a petabyte of storage space. That's probably enough to model a snapshot of my own brain, but not enough for backups!
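If you want to check that arithmetic, here's the extrapolation as a quick sketch (Python standing in for a pocket calculator; the only inputs are the figures above):

```python
import math

# Back-of-envelope extrapolation from the figures above:
# ~11 GB in the late 80s to ~1,000 GB (1 TB) at home 16 years later.
start_gb, end_gb, years = 11, 1_000, 16

growth = end_gb / start_gb                  # roughly 91x over the period
doubling_time = years / math.log2(growth)   # roughly 2.5 years per doubling

# At the same rate, how long from 1 TB to a petabyte (1,000,000 GB)?
to_petabyte = doubling_time * math.log2(1_000_000 / end_gb)

print(f"doubling time: {doubling_time:.1f} years")
print(f"1 TB to 1 PB:  {to_petabyte:.0f} more years")
```

At that rate the local petabyte is roughly 25 years out, which is partly why I expect petabyte access to come from central storage rather than from anything onboard.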
That's stage one. Stage two: within a dozen years of that, we should reach storage capacity sufficient to hold a few hundred snapshots or "frames".
Then we have to figure out how to reanimate the models. The AI boys will probably crack the problem in ways I can't even understand, but my own favourite is similar to the general simulation solution: capture 100 frames of the brain in action and you can calculate the 101st, and so on. I think reanimation might work something like that, but it might take even more data for the reanimated mind to "click in". It might take, say, 10 minutes' worth of brain capture for the model to be re-run and take over where the organic version left off.
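To make that concrete, here's a deliberately trivial sketch of the capture-and-extrapolate idea (Python with numpy; the hidden dynamics are linear, which a brain emphatically is not, so treat this as an illustration of the principle only):

```python
import numpy as np

# Toy version of "capture 100 frames, calculate the 101st": record the
# states of a system with unknown (here, linear) dynamics, fit the
# update rule from the recorded history, then extrapolate one frame.

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(5, 5)))   # hidden "laws of physics"

frames = [rng.normal(size=5)]                  # frame 0: initial state
for _ in range(100):                           # record frames 1 to 100
    frames.append(Q @ frames[-1])
frames = np.array(frames)

# Fit a matrix A such that frame[t+1] is approximately frame[t] @ A.
A, *_ = np.linalg.lstsq(frames[:-1], frames[1:], rcond=None)

frame_101 = frames[-1] @ A                     # the calculated 101st frame
print(np.allclose(frame_101, Q @ frames[-1]))  # True: matches the real dynamics
```

A real brain would need a vastly richer model than a single matrix, which is exactly why I suspect the reanimated mind needs a long run of frames before it can "click in".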
If that wild speculation is anything like the truth, then mere petabytes are nowhere near sufficient and we'd need somewhere close to 10 exabytes (10,000 petabytes) per person. That could take another 20-30 years.
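To unpack that figure: at a petabyte per snapshot, 10 exabytes over 10 minutes works out to roughly 17 snapshots per second, which at least puts the guess into a checkable form:

```python
# The arithmetic behind the ~10 exabyte figure, using only numbers
# already quoted: a ~1 PB snapshot and ~10 minutes of capture.
snapshot_pb = 1                   # one brain snapshot (the estimate above)
total_pb = 10_000                 # ~10 exabytes per person
capture_time_s = 10 * 60          # 10 minutes' worth of brain activity

frames = total_pb / snapshot_pb   # ~10,000 snapshots
rate = frames / capture_time_s    # ~17 snapshots per second

print(f"{frames:.0f} frames captured at {rate:.1f} frames per second")
```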
But ONLY another 20-30 years.
In other words, if I'm right, then we'll be able to make digital backups of the human mind by about 2020. We'll be able to re-activate those digital minds, in a digital environment, by about 2050. That's how close I believe we are to omortality.
If you're new to this game, go and check out Transhumanism.
After that, take a look at Ray Kurzweil's site.
Once you've assimilated his site in particular, you'll see that my predictions are reasonably restrained! Between those two sites, you'll also be looking at the main ideas which drive my vision of the future.