Radial function games
The simplest thing to do is to use a uniform random selection from 0 to 19. That means every location from 0 to 19 is equally likely to be chosen. Most languages include some function to generate random numbers uniformly; in Python it's in the random module. But suppose we wanted maps that had the treasures more likely to be on the left than the right? To do this we would use a non-uniform random selection. There are lots of ways to do this. One way is to choose a random number uniformly, then move it towards the left.
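The original code isn't reproduced here, but a minimal sketch of both ideas might look like this (halving is one assumed interpretation of "move it towards the left"; the function names are mine):

```python
import random

# Uniform selection: every location 0..19 is equally likely
def uniform_location():
    return random.randint(0, 19)

# One way to bias left: choose uniformly, then halve,
# which squeezes every result into 0..9
def left_biased_location():
    return random.randint(0, 19) // 2
```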
However, that's not quite what I want. I want treasures to sometimes be on the right, but more often on the left. One way to get that is to take the uniform random number, square it, and divide by 19, rounding down. If it's 0, then 0 squared divided by 19 is 0. If it's 19, then 19 squared divided by 19 is 19. In between, if it's 10, then 10 squared divided by 19 is 5.
We've kept the range from 0 to 19, but we've moved the middle numbers like 10 over to the left. This kind of redistribution is a very useful technique on its own, and I've used square and square root and other functions in past projects. This site [1] has some common reshaping functions used for animations; hover over a function to see the demo.
Yet another way to move things to the left is to first randomly choose a range limit, then randomly choose a number from 0 to the range limit. If the range limit were 19 then we could get a number anywhere. If the range limit were 10 then we would only get numbers in the left half. There are lots of tricks for taking uniform random numbers and turning them into non-uniform random numbers that have the properties you want. As the game designer you get to choose the distribution of the random numbers you want.
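A sketch of both techniques, assuming the same 0 to 19 range as before (function names are mine):

```python
import random

# Redistribute by squaring: keeps the range 0..19 but moves
# middle values to the left (e.g. 10 -> 10 * 10 // 19 == 5)
def squared_location():
    value = random.randint(0, 19)
    return value * value // 19

# Alternative: first pick a random upper limit, then pick a
# number up to that limit; small limits favor the left side
def range_limited_location():
    limit = random.randint(0, 19)
    return random.randint(0, limit)
```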
I've written an article about how to use random numbers for damage in role-playing games [2] where I give a bunch of tricks like this.

In old TV sets, if you tuned to a channel that didn't have a station, you'd see random black and white dots on the screen. That's noise from outer space! On radios, if you tune to a channel that doesn't have a station, you hear noise; I'm not sure whether that comes from space or elsewhere.
In signal processing, noise is typically the unwanted aspect. In a noisy room it's harder to hear someone than in a quiet room; audio noise is random numbers arranged in a line (1D). In a noisy image it's harder to see a pattern than in a clean image; image noise is random numbers arranged in a grid (2D). You can also have noise in 3D, 4D, etc. Although in most applications you're trying to subtract the noise, a lot of natural systems look noisy, so if you're trying to procedurally generate something that looks natural, you typically want to add noise.
Although real systems look noisy there's usually an underlying structure; the noise we add won't have that same structure, but it's much simpler than programming the simulation, so we use it and hope the end user doesn't notice.
This is a tradeoff that I'll talk about later. Let's look at a simple example of where noise is useful. Let's say we have a 1D map as before, but instead of a single treasure chest, we want to create a landscape of valleys, hills, and mountains. Let's start by using a uniform random selection at each location.
If random(1, 3) is 1 we'll set the location to a valley, if 2 to hills, and if 3 to mountains. I'm using random numbers to create a height map: at each location in the array, I store the height of the landscape. Hmm, these maps look "too random" for our needs. Maybe we'd like larger areas of valleys or hills, and we'd also like mountains to be less common than valleys.
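That first uniform landscape generator might be sketched like this (the map size of 60 matches the count mentioned later; the encoding 1/2/3 for valley/hills/mountains follows the text):

```python
import random

MAP_SIZE = 60  # map length, matching the 60 random numbers mentioned later

# 1 = valley, 2 = hills, 3 = mountains
def make_landscape(size=MAP_SIZE):
    return [random.randint(1, 3) for _ in range(size)]
```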
Earlier we saw that uniform selection of random numbers may not be what we want; there are times we want non-uniform selection. Can that help here? We could use some random selection where valleys are more likely than mountains. That decreases the number of mountains but doesn't really show any interesting patterns. The problem is that a non-uniform random selection changes what happens at each location in isolation, but we want the random selection at one location to be related to the random selections at nearby locations.
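One way to sketch such a non-uniform selection is a weighted choice; the 3:2:1 weights here are an illustrative assumption, not from the original:

```python
import random

# Weighted selection: valleys (1) three times as likely as
# mountains (3); the weights are an assumption for illustration
def biased_height():
    return random.choices([1, 2, 3], weights=[3, 2, 1])[0]

def make_biased_landscape(size=60):
    return [biased_height() for _ in range(size)]
```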
This is called "coherence". That's where noise functions come in. They give us a set of random numbers instead of one at a time. Here we want a 1D noise function to give us a sequence. Let's try a noise function that modifies a sequence of uniformly selected random numbers. There are lots of ways to do this but let's take the minimum of two adjacent numbers. If the original noise was 1, 5, 2, then the minimum of 1, 5 is 1, and the minimum of 5, 2 is 2.
So the resulting noise will be 1, 2. Note that it removed the high point 5. Also note that the resulting noise has one fewer value than the original. That means when we generate 60 random numbers below we will only get 59 out.
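That pairwise-minimum smoothing can be sketched as:

```python
# Smooth by taking the minimum of each adjacent pair;
# the output has one fewer value than the input
def smooth_min(values):
    return [min(a, b) for a, b in zip(values, values[1:])]

smooth_min([1, 5, 2])  # -> [1, 2]; the high point 5 is removed
```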
Let's apply this function to the first set of maps. Compared to the previous maps we made, these have larger areas of valleys, hills, or mountains.
Mountains are often near hills. And because of the way we modified the noise by taking the min, valleys are more common than mountains. If we had taken the max, mountains would be more common than valleys. If we had wanted neither valleys nor mountains more common, we could've taken the average instead of the min or max.
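All three variants can be expressed with one pluggable combiner (a sketch, not the article's original code):

```python
# Pairwise smoothing with a pluggable combiner: min favors
# valleys, max favors mountains, averaging favors neither
def smooth_with(values, combine):
    return [combine(a, b) for a, b in zip(values, values[1:])]

def average(a, b):
    return (a + b) / 2

smooth_with([1, 5, 2], min)      # -> [1, 2]
smooth_with([1, 5, 2], max)      # -> [5, 5]
smooth_with([1, 5, 2], average)  # -> [3.0, 3.5]
```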
We now have a noise modification routine that can take some noise and make new, smoother noise. Applying it a second time, our maps are even smoother and there are even fewer mountains. I think we've smoothed too much, since we're not seeing mountains near hills very often.
So maybe it's better to go back to one level of smoothing in this example. This is a common process with procedural generation: you try something and see whether it looks right, and if not, you change it back or try something else. Side note: smoothing is called a low-pass filter [3] in signal processing; it's sometimes used to remove unwanted noise. Picking a noise function sometimes takes guesswork.
Understanding how noise works and how you can modify it means you can make more educated guesses. In the previous section we chose noise by using random numbers as the output, then smoothing them.
This is a common pattern. You start with a noise function that uses random numbers as parameters. You can then modify existing noise to shape it to your needs. There are lots of other ways to modify noise functions. To some extent it doesn't matter how the noise was made.
Although this is a useful ansatz, other approaches without an underlying function are possible, which allow coalescing points with different values, using the idea of spline smoothing (Wahba). Generalising the radial basis function approach to matrix-valued kernels, alternative ideas are also available.
The flexibility of the approach is also based on the radial symmetry of each term (although not, of course, of the whole expression (1)), since its definition essentially depends only on a univariate function.
Most often, radial basis function approximations are used in combination with interpolation, i.e. the coefficients are chosen so that the approximant matches the given data at the centres. When the kernel function in the form of the radial basis function is strictly positive definite, the interpolation matrix is a positive definite matrix and non-singular; positive definite functions were considered in the classical paper of Schoenberg, for example. Positive definite functions, and their generalisations (conditionally positive definite functions, see below), are closely related to reproducing kernel Hilbert spaces (see the literature under further reading).
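As a concrete illustration (not part of the original entry), here is a small one-dimensional sketch of such interpolation using the Gaussian kernel, which is strictly positive definite; the toy linear solver and the kernel width are assumptions made for the example:

```python
import math

def gaussian_kernel(r, width=1.0):
    # Strictly positive definite radial function phi(r) = exp(-(r/width)^2)
    return math.exp(-(r / width) ** 2)

def solve(A, b):
    # Toy Gaussian elimination with partial pivoting (fine for tiny systems)
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rbf_interpolant(centres, values, width=1.0):
    # The interpolation matrix A[i][j] = phi(|x_i - x_j|) is positive
    # definite, so the coefficients lambda_j are uniquely determined
    # by the conditions s(x_i) = f_i
    A = [[gaussian_kernel(abs(xi - xj), width) for xj in centres] for xi in centres]
    lam = solve(A, values)
    def s(x):
        return sum(l * gaussian_kernel(abs(x - xj), width) for l, xj in zip(lam, centres))
    return s
```

For instance, `rbf_interpolant([0.0, 1.0, 2.0], [1.0, 3.0, 2.0])` returns a smooth function that passes through all three data points.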
But the polynomials are normally not of very high degree, constant to cubic being typical. The side conditions (4) have to be adjusted to different order conditions when polynomials of degree other than one are used.
Also the geometric conditions (centres not being collinear in the case of (4)) will have to be strengthened accordingly (Duchon). Such kernels are no longer positive definite as mentioned above, but conditionally positive definite, due to the aforementioned side conditions.
More general convergence theory is given, for instance, in Wu and Schaback, and in Narcowich, Ward and Wendland. A typical case is the multiquadric function, where the interpolation matrix has spectral properties which depend both on the parameter and on the distances between the data points. These can lead to large condition numbers, and of course the matrix is not sparse, so preconditioning and iterative methods are to be applied (for an early approach see Dyn and Levin). An exception is provided by the radial basis functions of compact support described below. The case when multiquadrics are used is very important since they are most often used in applications; other important choices are the aforementioned thin-plate splines and exponential functions.
References:
J. Duchon, "Splines minimizing rotation-invariant semi-norms in Sobolev spaces", in W. Schempp (ed.), K. Zeller (ed.), Constructive Theory of Functions of Several Variables, Springer (1977)
R.L. Hardy, "Theory and applications of the multiquadric-biharmonic method", Computers Math. Applic.
W.R. Madych, S.A. Nelson, "Bounds on multivariate polynomials and exponential error estimates for multiquadric interpolation", J. Approx. Theory
C.A. Micchelli, "Interpolation of scattered data: distance matrices and conditionally positive definite functions", Constructive Approx.
M.J.D. Powell, "The theory of radial basis function approximation"
M.J.D. Powell, "A new iterative method for thin plate spline interpolation in two dimensions", Ann. Numer. Math.
H. Wendland, "Piecewise polynomial, positive definite and compactly supported radial functions of minimal degree", Adv. Comput. Math.
Z. Wu, R. Schaback, "Local error estimates for radial basis function interpolation of scattered data", IMA J. Numer. Anal.
How to Cite This Entry: Radial basis function. Encyclopedia of Mathematics. See original article.