Thursday, April 26, 2012

Seeds of structure

Now that we've got a list of points that are "dense" enough to spawn structure and some information about how much matter and energy they contain as well as their history, how do we use that information to introduce structure into the local area? This will take me back to what I mentioned earlier about chemistry and fusion of elements. In essence, what we have is a whole lot of Element 1 contained in a very small area. We need to take that homogeneous soup and turn it into a physical object.

Broad plan:

Step 1: We have to determine, based upon the properties and history of that area, what new elements will form there and how much energy it will take to form them—the idea here being that eventually we will run out of energy to form new elements. The stage/rate at which this happens will produce certain attributes of the eventual structure. The initial density of the area will have a lot to say about that, since a higher density will form heavier elements with less wasted energy.

Step 2: Based upon the elements resulting from Step 1, we determine what sorts of chemical bonding will take place and in what amounts. This refers back to the simple chemistry developed earlier—certain elements will bond with others.

Step 3: The resulting substances from Step 2 will give us an idea of the nature of the structure in question, its chemical composition, size, etc.
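A rough sketch of how these three steps might chain together. Everything here--the class, the energy-cost formula, the bonding rule--is a placeholder of my own invention, not the actual Flat Galaxy code:

```java
import java.util.*;

// Hypothetical sketch of the three-step pipeline: energy budget -> elements -> bonds.
public class StructureSeed {
    // Step 1: spend the cell's energy budget forming elements; denser cells
    // reach heavier elements before the budget runs out.
    static Map<Integer, Double> formElements(double energy, double density) {
        Map<Integer, Double> amounts = new LinkedHashMap<>();
        int element = 1;
        while (energy > 0) {
            double cost = element / density;   // heavier elements cost more; density discounts the cost
            amounts.put(element, Math.min(energy, cost) / cost);
            energy -= cost;
            element++;
        }
        return amounts;
    }

    // Step 2: pair up elements whose outer "numbers" complement each other (placeholder rule).
    static List<int[]> bond(Set<Integer> elements, int shellCapacity) {
        List<int[]> bonds = new ArrayList<>();
        for (int a : elements)
            for (int b : elements)
                if (a < b && (a % shellCapacity) + (b % shellCapacity) == shellCapacity)
                    bonds.add(new int[]{a, b});
        return bonds;
    }

    public static void main(String[] args) {
        // Step 3 would characterize the structure from these outputs.
        Map<Integer, Double> formed = formElements(10.0, 2.0);
        System.out.println("elements formed: " + formed.keySet());
        System.out.println("bonds: " + bond(formed.keySet(), 8).size());
    }
}
```

The point of the sketch is the shape of the data flow: the energy budget caps how far up the table a cell can climb, and whatever elements result are the only inputs the bonding step sees.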

One of the biggest kinks to work out here is in the chemistry. I will likely need to go back and add some more sophistication to that system: elements with the proper "numbers" will still bond with certain others, but there needs to be some preference in their bonding, so that elements bond in a path-of-least-resistance sort of way.

Sunday, April 15, 2012

From clouds to rain

Now, finally, we're getting to the good parts. We have a map of a local area, or cell, with information regarding the amount of matter and energy available to do useful things... but getting from here to there is going to be a lot of work.

The first thing we need is a way to determine A) what elements are present in our cell, B) in what densities and C) at what locations. In order to determine that, we need to know how much energy is required to form elements, so it's back to chemistry.

In the real world, vast amounts of energy in the form of heat and pressure are required to fuse elements--around 27 million degrees Fahrenheit plus the crushing gravity at the center of a star just to fuse two hydrogen atoms together. We're not going to do anything like that here. Instead, we're going to simply use probability ranges to help determine distribution (with, of course, some randomness thrown in) and then the densities present in the local map we already have to determine how much fusion into heavier elements will occur.
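As a sketch of the probability-range idea, fusion odds can be read straight off the local density. The linear ramp and the one-step-at-a-time promotion rule here are placeholders of mine, not the actual scheme:

```java
import java.util.Random;

// Sketch: replace real fusion physics with a density-driven probability roll.
public class FusionRoll {
    // Higher local density -> higher chance that an element fuses one step up the table.
    static double fusionProbability(double density, double maxDensity) {
        return Math.min(1.0, density / maxDensity);   // simple linear ramp, capped at 1
    }

    // Each roll either promotes the element one step or leaves it alone.
    static int fuse(int element, double density, double maxDensity, Random rng) {
        return rng.nextDouble() < fusionProbability(density, maxDensity) ? element + 1 : element;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        int e = 1;                                    // start from Element 1
        for (int i = 0; i < 10; i++) e = fuse(e, 0.8, 1.0, rng);
        System.out.println("element after 10 rolls at 80% density: " + e);
    }
}
```

Dense regions climb the table quickly under this rule, while sparse regions mostly stay Element 1, which is the behavior the step above is after.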

To simplify things, at least for now, we will first make the decision that we've already expended all the energy necessary to create a galaxy full of the simplest element in our periodic table, meaning that the global and local mass-energy values are what we have left over from that (implicit) process. So instead of just looking at the real universe's elemental abundance and dividing things up based on that, let's just say our entire local map is made up of Element 1. At first.

Now we need to examine the local map very carefully. We need to find concentrations of E1, their constituent values and their areas, which will help us start "fusing" them together into heavier elements. How do we find these concentrations?

To solve this problem, what I've done is create an "accretion algorithm." It's taken me about a month of weekend spare-time work to iterate, evolve and implement. Undoubtedly, I will go back and fine-tune it, but it seems to work well enough to move forward. The idea behind (astrophysical) accretion is the accumulation of celestial matter via the force of gravity. As more material accumulates, it is able to attract more surrounding material, creating a feedback loop from which, eventually, enough material gathers to form planets, stars, etc. I can't directly simulate this process, not even in two dimensions and not even in a finite problem space--it would take trillions of floating point operations to even begin to see usable results. Instead, I take a few shortcuts, using the information I already have about where most of the material in the cell will most predictably wind up. After a successful run-through (which typically takes a few seconds), roughly 97-99% of the existing material in the cell ends up in the brightest (densest) areas in differing concentrations. I record information about this accumulation as it happens, which will allow me to characterize the different accumulation points--the ones that gathered the most material the quickest could become stars or black holes, while the average areas could become planets, etc.
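I won't post the real algorithm, but the flavor of the shortcut is something like this toy version: each tile simply hands its material to the densest tile in its neighborhood, so after a few passes nearly everything pools at the local peaks. The 3x3 kernel and the all-at-once transfer are illustrative only:

```java
// Toy accretion pass: rather than simulating gravity, repeatedly shift each
// tile's material toward its densest neighbor. Material is conserved overall.
public class Accretion {
    static double[][] step(double[][] m) {
        int h = m.length, w = m[0].length;
        double[][] next = new double[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                if (m[y][x] == 0) continue;
                int bx = x, by = y;                     // densest cell in the 3x3 neighborhood
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++) {
                        int ny = y + dy, nx = x + dx;
                        if (ny >= 0 && ny < h && nx >= 0 && nx < w && m[ny][nx] > m[by][bx]) {
                            by = ny; bx = nx;
                        }
                    }
                next[by][bx] += m[y][x];                // all material migrates toward it
            }
        return next;
    }

    public static void main(String[] args) {
        double[][] map = {{1, 2, 1}, {1, 9, 1}, {1, 1, 1}};
        for (int i = 0; i < 5; i++) map = step(map);
        System.out.println("center mass: " + map[1][1]);   // everything pools at the peak
    }
}
```

Even this crude rule produces the "feedback loop" shape: once a tile is the local maximum, it only ever grows, and recording who absorbed what (and how fast) is what lets the accumulation points be characterized afterward.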

Sadly, there is nothing compelling, visually, to show for it yet: just a black image with a few white dots here and there. It's the information recorded from the process that will allow me to begin creating celestial bodies--bodies that will not simply be static fixtures, but bodies made of and born from material spawned by an internally-consistent, natural process, and which are completely interactive. A gas giant will have harvestable gas. A world of hydrocarbons will have hydrocarbons--usable by anyone who can land there safely and extract them. These will not just be descriptions of static decorations and visual interest. Within the confines of Flat Galaxy, they will be real.

Monday, February 27, 2012

The material density map

Now we've got something (a fractal-noise image) that reasonably approximates, for our purposes, the elemental-matter-energy density of a galaxy. This would mean that a 1000x1000 pixel image represents an entire galaxy, so if we were making a galaxy the size of the Milky Way, each pixel would represent roughly 600 trillion miles or 9.5×10^14 kilometers of space. That's a lot. This is only a proof-of-concept, and I'm not ready to start diving into that kind of scale. I'm going to reduce our galaxy size to something a little more manageable. Let's say about 1/1,000,000,000,000,000th of that size: a galaxy about 600 miles or 1000 kilometers across, where each pixel represents an area about 0.6 miles (1 km) on a side. Everything in this galaxy will (probably) be scaled down appropriately: planets, stars, etc.

(Note: I can generate density maps larger than 1000x1000, which would give us much more fine-grained detail about the galaxy, but this is just a starting point. Plus, increasing the size of the image causes pattern generation to take exponentially longer amounts of time and currently results in images that are usually too dim to be very usable. This is something I'll come back to address later.)

Next we need a starting number. This number will be a reference point to the amount of energy and mass that exists in this galaxy (remember they're the same thing both in reality and, very transparently, in our model). We can produce this matter-energy (ME) number by simply totaling the values of each point in our density map. Here's an example number: 32329096.
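In code, the ME number is just a sum over the density map, and a single pixel's share is that total divided by the pixel count (the class and method names here are just for illustration):

```java
// Sketch: the galaxy's matter-energy (ME) number is the sum of the density map;
// each pixel's share is the total divided by the pixel count.
public class MatterEnergy {
    static long totalME(int[][] densityMap) {
        long sum = 0;
        for (int[] row : densityMap)
            for (int v : row) sum += v;
        return sum;
    }

    // A single pixel's slice of the total, e.g. 32329096 / (1000*1000) ≈ 32.33.
    static double cellME(long totalME, int width, int height) {
        return (double) totalME / ((long) width * height);
    }

    public static void main(String[] args) {
        int[][] map = new int[1000][1000];     // stand-in for the real fractal density map
        map[500][500] = 255;
        System.out.println("ME = " + totalME(map));   // prints ME = 255
        System.out.println(cellME(32329096L, 1000, 1000));
    }
}
```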

Now we need to take our density map and divide it up. Each pixel is 1 sq. km, which we need to divide into a bunch of more fine-grained units--basically what will become tiles on a local area map, or cell. A square meter is obviously a good first choice. This means that each pixel on our density map represents a cell of 1000 m × 1000 m = 1,000,000 square meters. That's a little more than 1/100th of the area of Manhattan--per pixel, one million times over--which means our galaxy is about 1/10th the size of the entire US.

So, say we're examining a pixel in the dead-center of the density map. We can estimate its individual ME value at about 32329096 / (1000*1000) = ~32.33. This small number is the amount of matter and energy we have available to distribute around the cell the pixel on the density map represents. For the time being, we're not going to allow values to distribute across boundaries, so each cell will be in a sense self-contained.

Now the noise functions mentioned in the previous post become really useful. We can generate a 1000x1000 pixel (these represent meters, now) noise map and scale it to our cell's ME value as it relates to the maximum (and minimum) ME values of the entire galaxy, giving us a proportionate and appropriately random (or determined, if we want to direct it a bit) distribution of matter and energy in the cell.

What we need is a noise map that, when we total all its values, equals the local ME value. If our local ME value is 32.33, then we want to produce a noise map where each point's value is, on average, equal to 32.33 / (1000*1000) = 0.00003233. We can do this by producing a noise map with random values and using a little math to map those values to our desired range. This gives us a noise map whose total ME value equals its corresponding ME value in the density map. And, if we produce an image from this by normalizing each point in the resulting noise map by the scaled maximum of any point (the density map point with the highest ME value), we can see that each cell has a "brightness" to match:

A cell with 10% of the highest cell's ME value

25%

50%

75%
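The scaling step itself is simple. Here's a minimal sketch using plain white noise rather than the interpolated noise maps (the names are mine, and the real maps use the cosine-interpolated noise from the previous post):

```java
import java.util.Random;

// Sketch: scale a random noise map so its values sum to the cell's ME value.
public class CellNoise {
    static double[][] noiseForCell(double cellME, int size, long seed) {
        Random rng = new Random(seed);
        double[][] noise = new double[size][size];
        double sum = 0;
        for (int y = 0; y < size; y++)
            for (int x = 0; x < size; x++)
                sum += (noise[y][x] = rng.nextDouble());
        double scale = cellME / sum;              // rescale so the map's total equals cellME
        for (double[] row : noise)
            for (int x = 0; x < size; x++) row[x] *= scale;
        return noise;
    }

    public static void main(String[] args) {
        double[][] cell = noiseForCell(32.33, 1000, 12345L);
        double total = 0;
        for (double[] row : cell) for (double v : row) total += v;
        System.out.println("cell total ME ≈ " + total);
    }
}
```

The normalization for display then just divides each point by the scaled global maximum to get a 0-255 brightness, which is what produces the matching "brightness" in the images above.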

But herein lies the rub. We can't do this ahead of time, because while each noise map only takes about half a second to produce (that's even using a fast cosine function during interpolation--using the standard Java one takes a second), we're producing one for each pixel on a density map of 1000x1000 pixels. Even if we filter out the pixels with zero values, at least ~50% of the pixels still have a value and it would take about 65 hours to produce every noise map (half that if we use less-pretty linear interpolation--and still far too long). That's apart from the fact that we certainly couldn't hold them all in memory, and they'd eat up about 2 terabytes of HD space. Nope. We need to produce them only as needed. I will begin to address this next.
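One likely shape for "produce them only as needed" (this is a sketch of the idea, not the final implementation): derive each cell's seed deterministically from its coordinates, so any cell's noise map can be regenerated bit-for-bit on demand instead of being precomputed and stored. The mixing constants below are arbitrary:

```java
import java.util.Random;

// Sketch: deterministic per-cell seeds mean no noise map ever needs to be stored.
public class LazyCell {
    static long seedFor(int px, int py, long worldSeed) {
        return worldSeed ^ (px * 73856093L) ^ (py * 19349663L);   // arbitrary mixing constants
    }

    static double[] generate(int px, int py, long worldSeed, int n) {
        Random rng = new Random(seedFor(px, py, worldSeed));
        double[] values = new double[n];
        for (int i = 0; i < n; i++) values[i] = rng.nextDouble();
        return values;
    }

    public static void main(String[] args) {
        double[] a = generate(500, 500, 42L, 5);
        double[] b = generate(500, 500, 42L, 5);   // same coords -> identical map
        System.out.println(java.util.Arrays.equals(a, b));   // prints true
    }
}
```

With this, only the half-second generation cost is paid, and only for cells the player actually visits.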

Saturday, February 25, 2012

Blending

I wasn't totally satisfied with the layered fractals being the stepping-off point for a true material density map, so I made some revisions to the code. I spent several days delving into various noise functions and eventually came up with a fairly fast and simple method of blending the disparate-looking fractals using a sort of hybrid Perlin-noise-with-cosine-interpolation function. These are a few of the results:




A lot closer to what I'm looking for. The blending functions aren't perfect and still leave some of the undesired "webbing" between less dense parts of the image, but I can start from here. After basically spending the entire week playing around with and digging into various noise-generation methods (Perlin, Simplex, midpoint-displacement), I've got a handle on how to use them going forward for a lot of procedural stuff. Time well spent.

Sunday, February 19, 2012

Layered fractals

Per the previous post, here are a few images of layered fractals. Unfortunately, they're not as impressive as I'd hoped: it seems that when one fractal is bright, the other is dim, and rarely if ever are both bright at the same time. The other problem is that they aren't as organic as I wanted them to be, since using different formulas for each results in images that don't line up nicely, while using the same formula for each results in images that look so similar you often can't tell where one is vs. the other. Nevertheless, they're still pretty cool.

I also made a few more improvements to the fractal generation itself, most notably introducing multi-threading logic to the (embarrassingly parallel) pattern generation so that generating ten two-fractal images went from taking about 50 seconds on average to taking just 20 seconds on average. I'll take a 60% speed improvement anytime.

(Edit: I forgot to apply my previous optimization for those numbers. With both of my optimizations, the multi-threaded version takes 10 seconds to generate 10 two-fractal images while the one-thread version takes 30 seconds. With none of my optimizations, that same set took about 50 seconds to generate, a total improvement of about 80%).
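For the curious, the embarrassingly parallel part looks roughly like this parallel-streams sketch (not my actual thread code, and the per-pixel function is a stand-in): each row of the image depends on nothing but its own coordinates, so rows can be rendered on all cores at once.

```java
import java.util.stream.IntStream;

// Sketch: per-pixel fractal generation is embarrassingly parallel, so rows
// can be handed out across cores with no locking.
public class ParallelRender {
    static int[][] render(int w, int h) {
        int[][] img = new int[h][w];
        IntStream.range(0, h).parallel().forEach(y -> {
            for (int x = 0; x < w; x++)
                img[y][x] = iterate(x, y, w, h);   // rows never share state -> safe to parallelize
        });
        return img;
    }

    // Placeholder per-pixel work standing in for the real fractal iteration.
    static int iterate(int x, int y, int w, int h) {
        double zr = 4.0 * x / w - 2, zi = 4.0 * y / h - 2;
        int i = 0;
        while (zr * zr + zi * zi < 4 && i < 100) {
            double t = zr * zr - zi * zi - 0.7;
            zi = 2 * zr * zi + 0.27015;
            zr = t;
            i++;
        }
        return i;
    }

    public static void main(String[] args) {
        int[][] img = render(200, 200);
        System.out.println("rendered " + img.length + "x" + img[0].length);
    }
}
```

Because each pixel is a pure function of its coordinates, the parallel version produces exactly the same image as the serial one, just faster.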





And here are a couple of single-fractal images I just thought were really awesome.



Saturday, February 18, 2012

Galaxies and fractals

Now we have a basic chemistry. What do we do with it? Well, we want to distribute atomic particles throughout our model. According to what criteria? We certainly wouldn't want to distribute them evenly, or our little universe would look the same everywhere. We don't want the distribution to be completely random, either, since the whole idea of Flat Galaxy is to seed our models with order over a large scale. One thing fulfills these criteria very nicely: fractals.

A fractal is a pattern that exhibits self-similarity. Fractals are seen everywhere in nature, and there's even a competing theory to the Big Bang holding that matter distribution throughout the universe follows a fractal pattern--which would mean that our own Milky Way galaxy is, in some sense, a model of the pattern of the entire universe. Fractal patterns are generated by recursively (self-referentially) iterating complex polynomial functions, taking the output of the previous iteration of the function as input for the next. The classic Julia set function is f(z) = z^2 + c where, basically, z represents a point on the complex plane (which is mapped to the image plane) and c represents a complex number that largely determines the behavior of the fractal. We simply iterate the function for each point on the screen, and if after X iterations the output has not diverged toward infinity (based on some threshold value), we draw the point on the screen.
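In code, the escape-time loop looks something like this (a from-scratch sketch of the standard algorithm, not the generator I'm actually using; the plane bounds and grayscale mapping are my choices):

```java
// Escape-time rendering of the Julia set f(z) = z^2 + c, with brightness
// proportional to how long each point takes to diverge.
public class Julia {
    static int[][] julia(int w, int h, double cr, double ci, int maxIter) {
        int[][] img = new int[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                // map the pixel to the complex plane, roughly [-2, 2] on both axes
                double zr = 4.0 * x / w - 2.0, zi = 4.0 * y / h - 2.0;
                int i = 0;
                while (zr * zr + zi * zi < 4.0 && i < maxIter) {   // |z| > 2 -> diverges
                    double t = zr * zr - zi * zi + cr;             // z = z^2 + c (real part)
                    zi = 2.0 * zr * zi + ci;                       // (imaginary part)
                    zr = t;
                    i++;
                }
                img[y][x] = 255 * i / maxIter;                     // grayscale "density"
            }
        return img;
    }

    public static void main(String[] args) {
        int[][] img = julia(400, 400, -0.8, 0.156, 200);
        System.out.println("rendered " + img.length + " rows");
    }
}
```

Points that never escape get full brightness, so the dense "galactic" regions in the images are exactly the slow-to-diverge parts of the set.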

Another bonus to fractals is that for now, at least, there is no physics model in Flat Galaxy. Implementing physics would complicate things immensely, even in 2d, and would force me to spend large amounts of time writing collision detection code on a fairly immense scale. Most problematic is that it would eat up processing cycles I would prefer to be dedicated to more important things. We can gloss over the lack of physics in various other ways. Fractals are one of them. We don't have to simulate accretion disks or orbits or anything else. Instead, we can just use a probability map based on a fractal and the amount of energy required to form any given element to determine concentrations of elements. From there, we can narrow our focus to the elements themselves, allowing them to bond depending on their individual properties and proximity.

I grabbed a simple fractal generation algorithm off the internet and made a bunch of modifications. The first was to the output functionality, so that the images produced would be color-coded based on the density of the points. The images look best in grayscale, since they look how you'd expect a top-down image of a galaxy to look. Unfortunately, the images also take some time to generate (around 2 seconds each on a fairly new computer), so I made a small, simple optimization that gave a 40-50% speed improvement without any degradation of the fractal, provided the threshold value for the fractal in question is high enough (the lower the threshold value--which also has a proportional relationship to the "brightness" of the image--the less optimization can be done).

I also added functionality to superimpose two fractals, one on top of the other, to randomize the complex function used to generate the fractal(s), and to randomize the complex values that represent c.

Here are a few of the images generated by the program (I think these are all one-fractal images. I'll have to generate some decent two-fractal images, since I think I deleted most of them):






You might notice that the "orientation" of these patterns is somewhat similar. This is because the same function was used to generate most of them.

I'm still working on the generation process for the superimposed fractal set, since those tend to be far more hit-or-miss. Randomizing the functions and the c-value also generates a lot of useless images (which are usually automatically filtered out based on a couple of threshold values), and my plan is to generate these images continuously on a web server and send them to the client, which can then pick and choose which images to use without having to generate them itself. I also may put some work into smoothing the "webbing" you can see in the areas between the lighter areas.

Next is taking these images and creating a material density map.

Sunday, February 12, 2012

A procedurally generated chemistry

We can appropriately characterize chemistry as the interactions between basic elements. In the real world, elements are atoms. In Flat Galaxy, basic elements will be somewhat different. We're going to come up with a chemistry-creation scheme that contains enough randomness to give us something different every time but with enough structure that we can expect to be able to use some of the results.

Let's start by generating around 100 random numbers from 1-100:

89 31 64 4 60 ... 71 75 7 48 53

These are our elements. Think of them as atomic numbers (not atomic weights). We would expect to get each number about once on average, but in any single trial we won't, and we're going to cut out duplicates. This leaves us with somewhat fewer than our target number (the expected count of distinct values works out to around 63 of 100).
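That expected count actually has a closed form: drawing n times uniformly from n values leaves n(1 - (1 - 1/n)^n) distinct values on average, which approaches n(1 - 1/e) ≈ 0.632n as n grows.

```java
// Expected number of distinct values after n uniform draws from n values:
// n * (1 - (1 - 1/n)^n), which tends toward n * (1 - 1/e) for large n.
public class DistinctCount {
    static double expectedDistinct(int n) {
        return n * (1 - Math.pow(1.0 - 1.0 / n, n));
    }

    public static void main(String[] args) {
        System.out.printf("expected distinct elements: %.1f%n", expectedDistinct(100));  // ~63.4
    }
}
```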

Now let's generate a semi-random "fixing" number in the lower half of our number range:

20

What is the point of this fixing number? It's entirely arbitrary, but it gives us a reference point from which we can tease out relationships between our elements. Why in the lower half of our number range? No reason, other than it'll give us interesting values. I actually picked 20 in this case because it's close to the number we would expect to get consistently with enough trials (remember, random number in the lower half of a range of numbers from 1 to slightly less than 100).

Now what? Well, from here we start examining similarities in the numbers based on breaking their values up into discrete chunks, kind of like how electrons in atoms are broken up into electron shells. The number of "electrons" each shell can hold is determined by a function loosely based on the general 2n^2 value from real-life chemistry, where n is the shell number, but which also incorporates our fixing number from earlier. This gives us some interesting values for shell capacities, and I've even had it exactly mimic real-life electron shell capacities on occasion.
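As an illustration only (the actual capacity function is more involved than this), here's the simplest possible version: 2n^2 plus the fixing number, with electrons filled shell by shell until a partially full outer shell remains.

```java
// Sketch of the shell-capacity idea: a 2n^2-style function nudged by the "fixing"
// number. With fix = 0 it reduces to real-life capacities (2, 8, 18, 32, ...).
public class Shells {
    static int capacity(int shell, int fix) {
        return 2 * shell * shell + fix;
    }

    // Fill an element's "electrons" into shells; return what's left in the outer one.
    static int outerElectrons(int atomicNumber, int fix) {
        int shell = 1, remaining = atomicNumber;
        while (remaining > capacity(shell, fix)) {
            remaining -= capacity(shell, fix);
            shell++;
        }
        return remaining;
    }

    public static void main(String[] args) {
        // with fix = 0: element 11 fills shells of 2 and 8 and leaves 1 electron, like sodium
        System.out.println(outerElectrons(11, 0));   // prints 1
    }
}
```

The outer-shell remainder is the number the next paragraph uses to decide which elements will readily bond.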

Once our "elements" are given shells with capacities, we divide their "electrons" up into these shells, and whatever is left over in the final shell is what that element has to make bonds with other elements. For example, if Element 30 has 12 electrons in its outermost shell, which has a capacity of 20, we know that Element 30 will readily bond with any element looking to either give up 8 electrons (i.e., one with 8 electrons in its outer shell) or gain 12 electrons. From here, we can group our elements with similar bonding properties together, and even construct a "periodic table." Due to the randomness I've introduced into the system, this table ranges from narrow and deep to wide and shallow (based largely on the fixing number--a larger fixing number means a wider, shallower table). It doesn't quite measure up to the real-life periodic table, but there's a lot of subtlety it's still missing. Here are a couple examples, with similar elements in the same columns:

Number of elements targeted: 118
Number of elements actually generated: 101
Generated fix: 5
[ 1 ][ 3 ][ 4 ][ 5 ][ 10][ 11][ 12][ 13][ 14][ 15]
[ 2 ][ 7 ][ 8 ][ 9 ][ 25][ 26][ 27][ 38][ 39][ 30]
[ 16][ 17][ 18][ 19][ 35][ 36][ 47][ 48][ 49][ 40]
[ 20][ 22][ 33][ 24][ 45][ 46][ 57][ 58][ 59][ 50]
[ 31][ 32][ 43][ 34][ 55][ 56][ 67][ 68][ 69][ 60]
[ 41][ 42][ 53][ 44][ 65][ 66][ 77][ 78][ 99][ 80]
[ 51][ 52][ 73][ 54][ 75][ 76][ 87][108][109][ 90]
[ 61][ 72][ 83][ 64][ 85][ 86][ 97][118][   ][100]
[ 71][ 82][103][ 74][ 95][ 96][107][   ][   ][   ]
[ 81][ 92][113][ 84][105][106][117][   ][   ][   ]
[ 91][102][   ][ 94][115][116][   ][   ][   ][   ]
[101][112][   ][114][   ][   ][   ][   ][   ][   ]

Sorry about the font size and spacing here. It was necessary to make it fit.
Number of elements targeted: 118
Number of elements actually generated: 121
Generated fix: 12
[ 2 ][ 9 ][ 4 ][ 5 ][ 6 ][ 7 ][ 14][ 15][ 16][ 56][ 18][ 19][ 20][ 21][ 22][ 39][ 38][ 41][ 40][ 43][ 42][ 45][ 90][ 46]
[ 23][ 24][ 10][ 11][ 12][ 28][ 29][ 54][ 31][ 78][ 33][ 34][ 35][ 36][ 37][ 85][108][ 87][ 86][ 89][ 88][ 91][114][ 92]
[ 47][ 48][ 25][ 26][ 27][ 52][ 53][ 76][ 55][102][ 57][ 58][ 59][ 82][ 61][109][132][111][110][113][112][115][   ][116]
[ 62][ 63][ 49][ 50][ 51][ 67][ 75][100][ 77][126][ 79][ 80][ 81][130][ 83][133][   ][135][134][   ][   ][   ][   ][   ]
[ 68][ 70][ 64][ 65][ 66][ 74][   ][124][101][   ][103][104][105][   ][107][   ][   ][   ][   ][   ][   ][   ][   ][   ]
[ 69][ 94][ 71][ 72][ 73][ 98][   ][   ][125][   ][127][128][   ][   ][131][   ][   ][   ][   ][   ][   ][   ][   ][   ]
[ 93][118][ 95][ 96][ 97][122][   ][   ][   ][   ][   ][   ][   ][   ][   ][   ][   ][   ][   ][   ][   ][   ][   ][   ]
[117][   ][119][120][121][   ][   ][   ][   ][   ][   ][   ][   ][   ][   ][   ][   ][   ][   ][   ][   ][   ][   ][   ]

You can definitely see some patterns in the numbers which I'd like to mix up a bit, and it's not exactly scientific, but it's a start.