Sunday, January 3, 2016

Mira the Supercomputer Builds Universes







Cosmology is the most ambitious of sciences. Its goal, plainly stated, is to describe the origin, evolution, and structure of the entire universe, a universe that is as enormous as it is ancient.


Surprisingly, figuring out what the universe used to look like is the easy part of cosmology. If you point a sensitive telescope at a dark corner of the sky, and run a long exposure, you can catch photons from the young universe, photons that first sprang out into intergalactic space more than ten billion years ago. Collect enough of these ancient glimmers and you get a snapshot of the primordial cosmos, a rough picture of the first galaxies that formed after the Big Bang.


Thanks to sky-mapping projects like the Sloan Digital Sky Survey, we also know quite a bit about the structure of the current universe. We know that it has expanded into a vast web of galaxies, strung together in clumps and filaments, with gigantic voids in between.




The real challenge for cosmology is figuring out exactly what happened to those first nascent galaxies. Our telescopes don’t let us watch them in time-lapse; we can’t fast-forward our images of the young universe. Instead, cosmologists must craft mathematical narratives that explain why some of those galaxies flew apart from one another, while others merged and fell into the enormous clusters and filaments that we see around us today.


Even when cosmologists manage to cobble together a plausible such story, they find it difficult to check their work. If you can’t see a galaxy at every stage of its evolution, how do you make sure your story about it matches up with reality? How do you follow a galaxy through nearly all of time? Thanks to the astonishing computational power of supercomputers, a solution to this problem is beginning to emerge: You build a new universe.


In October, the world’s third-fastest supercomputer, Mira, is scheduled to run the largest, most complex universe simulation ever attempted. The simulation will cram more than 12 billion years’ worth of cosmic evolution into just two weeks, tracking trillions of particles as they slowly coalesce into the web-like structure that defines our universe on a large scale.


Cosmic simulations have been around for decades, but the technology needed to run a trillion-particle simulation only recently became available. Thanks to Moore’s Law, that technology is getting better every year. If Moore’s Law holds, the supercomputers of the late 2010s will be a thousand times more powerful than Mira and her peers. That means computational cosmologists will be able to run more simulations at faster speeds and higher resolutions. The virtual universes they create will become the testing ground for our most sophisticated ideas about the cosmos.


Salman Habib is a senior physicist at the Argonne National Laboratory and the leader of the research team working with Mira to create simulations of the universe. Last week, I talked to Habib about cosmology, supercomputing, and what Mira might tell us about the enormous cosmic web we find ourselves in.


Help me get a handle on how your project is going to work. As I understand it, you’re going to create a computer simulation of the early universe just after the Big Bang, and in this simulation you will have trillions of virtual particles interacting with each other — and with the laws of physics — over a time period of more than 13 billion years. And once the simulation has run its course, you’ll be looking to see if what comes out at the end resembles what we see with our telescopes. Is that right?


Habib: That’s a good approximation of it. Our primary interest is large-scale structure formation throughout the universe, and so we try to begin our simulations well after the Big Bang, and even well after the microwave background era. Let me explain why. We’re not sure how to simulate the very beginning of the universe because the physics is very complicated and partially unknown, and even if we could, the early universe is structurally homogeneous relative to the complexity that we see now, so you don’t need a supercomputer to simulate it.


Later on, at the time of the microwave background radiation, we have a much better idea about what’s going on. WMAP and Planck have given us a really clear picture of what the universe looked like at that time, but even then the universe is still very homogeneous — its density perturbations are something like one part in a hundred thousand. With that kind of homogeneity, you can still do the calculations and modeling without a supercomputer. But if you fast-forward to the point where the universe is about a million times denser than it is now, that’s when things get so complicated that you want to hand over the calculations to a supercomputer.
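
In the standard notation, that degree of smoothness is a fractional density contrast of roughly

```latex
\frac{\delta\rho}{\rho} \sim 10^{-5}
```

a figure that comes from the microwave background measurements Habib mentions, not from anything specific to this simulation.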


Now the trillions of particles we’re talking about aren’t supposed to be actual physical particles like protons or neutrons or whatever. Because these trillions of particles are meant to represent the entire universe, they are extremely massive, something in the range of a billion suns. We know the gravitational mechanics of how these particles interact, and so we evolve them forward to see what kind of densities and structure they produce, both as a result of gravity and the expansion of the universe.


So, that’s essentially what the simulation does: It takes an initial condition and moves it forward to the present to see if our ideas about structure formation in the universe are correct.
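
To give a flavor of what evolving these particles forward means in practice, here is a deliberately tiny, illustrative sketch in Python of a gravity-only N-body step: direct pairwise forces plus a leapfrog integrator. The units, softening length, and particle count are arbitrary toy assumptions; the production code on Mira uses far more sophisticated solvers, periodic boxes, and comoving coordinates to fold in the expansion of the universe, none of which appears here.

```python
import numpy as np

G = 1.0           # gravitational constant in arbitrary toy units (assumption)
SOFTENING = 0.05  # softening length so close encounters don't blow up

def accelerations(pos, mass):
    """Direct-summation gravitational acceleration on every particle, O(N^2)."""
    diff = pos[None, :, :] - pos[:, None, :]      # displacements r_j - r_i
    dist2 = (diff ** 2).sum(axis=-1) + SOFTENING ** 2
    inv_d3 = dist2 ** -1.5
    np.fill_diagonal(inv_d3, 0.0)                 # a particle exerts no self-force
    return G * (diff * (mass[None, :, None] * inv_d3[:, :, None])).sum(axis=1)

def leapfrog_step(pos, vel, mass, dt):
    """One kick-drift-kick update, the workhorse integrator of N-body codes."""
    vel_half = vel + 0.5 * dt * accelerations(pos, mass)   # half kick
    pos = pos + dt * vel_half                              # drift
    vel = vel_half + 0.5 * dt * accelerations(pos, mass)   # half kick
    return pos, vel

# Evolve a small random cloud of heavy tracer particles and watch it clump.
rng = np.random.default_rng(42)
pos = rng.uniform(-1.0, 1.0, size=(256, 3))
vel = np.zeros((256, 3))
mass = np.ones(256)       # each "particle" stands in for an enormous mass
for _ in range(200):
    pos, vel = leapfrog_step(pos, vel, mass, dt=0.005)
```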


At the largest scales, how would you describe the structure of the universe as we see it today through our telescopes? Some say it’s web-like, or that it’s composed of sheets and filaments — are those accurate descriptions?


Habib: That’s a very accurate way to think about it. People often conceive of it as a cosmic web, a picture that dates back to the Soviet physicist Yakov Zel’dovich, who had this very deep insight about how structure forms in the universe. The idea is that initially the universe is very smooth, very homogeneous, with few perturbations.


If you looked at it, you wouldn’t see much. But then as the universe expands, gravity causes matter to attract and to form local structures. The first structures to form are sheets, and where the sheets intersect you get filaments, and where the filaments intersect you get clumps. As time progresses, you can start to see the basic structure where you have this enormous web of voids, filaments and clumps. The sheets are very thin, very ephemeral, so it is much harder to see them, but the rest of the structure is very sharp and clear, especially as seen by the Sloan Digital Sky Survey.





Have previous simulations been successful in producing the structure we see with telescopes?


Habib: Oh yes, the web-like structure is completely borne out by simulations. Simulations date back a long way; one of the earliest — the one I consider to be the precursor to modern simulations — was done in the late 1960s by the Canadian-American cosmologist Jim Peebles. He spent a summer at Los Alamos and while he was there he was able to perform a 300-particle simulation, which is of course quite small compared to today’s simulations. People have been running larger and larger simulations since then, and when they do, they consistently see this same web-like structure.


Is there an aesthetic component to these simulations? Can you actually see galaxies forming?


Habib: There is definitely an aesthetic component. We’re looking at an actual image of the structure, but you can’t see galaxies forming. It’s not quite that granular, and besides, these are gravity-only simulations. For large-scale structure simulations, gravity is all you need to understand how you get sheets and filaments and clumps. If you want to see how galaxies form, you need the rest of physics — individual atoms, angular momentum, gas physics, and so on. These are enormously complicated processes, and we don’t yet have the computing power to run them on the scale of the entire universe. There are people who do simulate galaxy formation with supercomputers, but they have to do it over much smaller volumes of the universe.


Some of the inflationary models for the early universe suggest a process that would continue to produce additional universes, perhaps with their own laws of physics. Certainly that’s not something we could model with a computer now, but might it be someday?


Habib: It could be, but we’d have to understand the theory better. The theory you’re talking about, eternal inflation, has two issues. First, there’s the sheer difficulty of the calculations; second, the theory itself is not yet well defined. I would argue that, at the moment, theories like eternal inflation are in the realm of speculative physics. There are models for eternal inflation — I’ve written papers about them, and so have a lot of other people — but if you go and look at the equations, they are not very well defined.


That’s because when you talk about producing new universes, you’re talking about the intersection between quantum mechanics and gravity, and we don’t yet have a satisfactory theory of quantum gravity. We have candidates for what might someday morph into a satisfactory theory, but we can’t say for sure. The multiverse idea is interesting and provocative, but it’s a work in progress.


What happens when you let the models run past the present? Time-wise, what’s the farthest that someone has taken one of these simulations?


Habib: That’s an interesting question. We usually just stop the simulations at the present, because we’re still trying to understand how we got here, but there’s no particular reason to stop them. You can continue to run them forward, and some people have done that in the past. What they’ve found is that if you run the universe far enough into the future, it expands into a pretty bleak place.


All the matter rushes away from everything else, because space is being created at an ever-accelerating rate. In fact, people often joke that this is the right time to do cosmology, because trillions of years from now we won’t be able to see anything: Everything will have receded out of sight.


So yes, we can run these simulations into the future, but it’s not that interesting. The universe is much more interesting now than it’s going to be in the future, provided that this accelerating expansion phase of the universe continues as we expect it to.


Your project was made possible by the development of the Mira supercomputer, the third-fastest computer in the world. Can you describe what makes Mira so special?


Habib: Let me say one or two things about supercomputers. Every few years supercomputers become about 10 times more powerful, so with each new generation you get quite a leap in capabilities. Not only do supercomputers get faster, but they get much larger, which allows you to run much bigger problems. What distinguishes a supercomputer like Mira from a normal computer is that it has a very large number of computational units.


A simplified way to think about it is to imagine having a million laptops that you’ve networked in such a way that they’re able to communicate with each other very quickly. Now you split your problem up into a million chunks and you give each chunk to a laptop, and the laptop works on its chunk and passes the data around as needed, and eventually your problem gets solved.
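
As a cartoon of that split-and-communicate pattern, here is a sketch using Python’s standard multiprocessing module: the problem is cut into chunks, each worker handles one chunk, and the partial answers are combined at the end. Real machines like Mira coordinate their nodes with MPI over a fast interconnect; the function name and the toy workload here are made up purely for illustration.

```python
from multiprocessing import Pool

import numpy as np

def work_on_chunk(chunk):
    """Stand-in for whatever each 'laptop' computes on its piece of the problem."""
    return float(np.sum(chunk ** 2))   # placeholder workload

if __name__ == "__main__":
    problem = np.arange(1_000_000, dtype=float)
    chunks = np.array_split(problem, 8)              # cut the problem into 8 pieces
    with Pool(processes=8) as pool:
        partials = pool.map(work_on_chunk, chunks)   # each worker takes a chunk
    total = sum(partials)                # stitch the partial answers back together
    print(total)
```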


What makes all of these simulations possible is the sheer size of the supercomputers. For example, Mira has close to a petabyte of memory. If you tried to do a simulation like this on a normal computer, the problem wouldn’t fit in memory, and even if it did, the run would never finish. With Mira, we’re able to complete these universe simulations in the span of a week or two.
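
A rough back-of-envelope calculation (illustrative numbers of my own, not figures from the Argonne team) shows why memory on that scale matters: merely storing positions and velocities for a trillion particles already runs to tens of terabytes, before any of the solver’s working arrays are counted.

```python
# Back-of-envelope memory estimate; the assumptions are illustrative only.
n_particles = 10 ** 12         # one trillion tracer particles
bytes_per_particle = 6 * 8     # 3 position + 3 velocity double-precision floats
state_bytes = n_particles * bytes_per_particle
print(f"{state_bytes / 1e12:.0f} TB just for the raw particle state")  # ~48 TB
```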


I know that supercomputers like Mira are used for all kinds of scientific experiments outside of cosmology. What else will they be used for in the next few years?


Habib: There are a large number of applications. People use supercomputers to determine the properties of materials, to understand combustion, to figure out how a flame works. They’re also used for fluid dynamics; for instance, you might want to know how air flows around the wing of an aircraft, and you can calculate that quite precisely with a supercomputer. In astrophysics there are all sorts of applications; people use supercomputers to study intergalactic gas, the formation of stars, supernovae, and so on.





Moore’s Law tells us that processing power increases exponentially. Assuming the next few years bring a huge leap in processing power, would you rather use it to run these simulations more quickly, or at higher complexity?


Habib: There’s a difficulty that we’re running up against with Moore’s Law. If you want to get more performance out of these computers, you can do it two ways: You can make the computational units switch faster or you can add more computational units. It turns out that if you want to make the units switch faster, you need more power. We’ve reached a limit where we can, in principle, build a faster machine, but it would cost us many gigawatts of power to actually run it and we simply can’t afford to do that. So conventional Moore’s Law is already reaching a breaking point because of this power barrier.
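
The power barrier he is describing follows from the standard rule of thumb for a chip’s dynamic power draw, where C is the switched capacitance, V the supply voltage, f the clock frequency, and alpha the fraction of transistors switching:

```latex
P_{\text{dynamic}} \approx \alpha \, C V^{2} f
```

Historically, raising f also required raising V, so power grew much faster than linearly with clock speed. Lowering V claws some of that back, but it shrinks the noise margins, which is where the errors Habib mentions next come from.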


Now if you want to solve this problem by reducing the amount of power used by the switches in the computer, then you have to reduce the voltage, but if you reduce the voltage you get more errors. So the next generation of computers — in five years or so — promises to be very different. We may have to program them in different ways, and we may have to think about how to power them differently, or how to correct for errors. It’s going to be interesting and in some senses it’s going to be more painful than it is now.


Now around 2018 or 2020, somewhere around that time scale, these machines are supposed to become a thousand times faster than they are right now. There are a lot of studies being done to figure out what you could do with a machine like that, but whether we’ll actually get there, I don’t know. It’s not yet clear that there will be investment in the technologies we need to get us there.


There is some hope that there will be investment, because supercomputer simulations are increasingly being used outside the basic sciences. Supercomputers are playing a big role in the development of new technologies. For instance, you can design a diesel engine without ever building a prototype simply by simulating it with a supercomputer.


It seems like a large, sped-up version of one of these universe simulations would be perfect as a piece of public art. Has anyone tried anything like that?


Habib: That’s an interesting thought. The question is how you would actually show it, because it is a dynamic object. You could have it as a projection like you see at planetariums and that would be very beautiful, but really you have to see it in three dimensions. Until you see it in three dimensions you cannot appreciate how beautiful the structure is. What would be neat is a large-scale hologram — something where you could actually see the structure pop up around you. That would really be something to see.



This article was originally published at The Atlantic.


Read more: http://mashable.com/2012/09/25/ibm-mira-supercomputer/





Apps and Software, IBM, space, supercomputer, tech, world
