Signal processing algorithms defined in continuous time/space/frequency are typically implemented by sampling the signal on a discrete grid and converting integrals into sums (and derivatives into differences). Spatial filters are implemented through convolution with a kernel (i.e. a weighted sum of neighbors).
There is a huge body of knowledge concerning filtering sampled time-domain signals. Time-domain filters are implemented either as finite impulse response (FIR) filters, where the current output sample is computed as a weighted sum of the previous N input samples, or as infinite impulse response (IIR) filters, where the current output is a weighted sum of the previous inputs and the previous outputs. Formally, discrete-time filters are described using the z-transform, which is the discrete-time analog of the Laplace transform. The bilinear transform maps one description to the other (c2d and d2c in Matlab).
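To make the two filter structures concrete, here is a minimal sketch in Python/NumPy (standing in for Matlab here); the helper names fir_filter and iir_filter are my own, not library routines:

```python
import numpy as np

def fir_filter(b, x):
    """FIR: each output sample is a weighted sum of the last len(b) inputs."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        for k, bk in enumerate(b):
            if n - k >= 0:
                y[n] += bk * x[n - k]
    return y

def iir_filter(b, a, x):
    """IIR: output is a weighted sum of past inputs AND past outputs.
    Assumes the coefficients are normalized so that a[0] == 1."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y[n] = acc
    return y

# Impulse responses show the difference: the FIR response dies out after
# len(b) samples, while the IIR response decays but never quite reaches zero.
impulse = np.zeros(8)
impulse[0] = 1.0
print(fir_filter([0.5, 0.5], impulse))          # 2-tap moving average
print(iir_filter([1.0], [1.0, -0.5], impulse))  # one-pole smoother
```

The explicit loops are deliberately naive; in practice you would use a library routine (filter in Matlab) that implements the same difference equations efficiently.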
How would you evaluate functions at arbitrary points?
When you need the value of a signal at a point that does not lie directly on your sampling grid, you interpolate its value from nearby points. Interpolation can be as simple as choosing the nearest sample, computing a weighted average of the nearest samples, or fitting an arbitrarily complicated analytic function to the sampled data and evaluating this function at the needed coordinates. Interpolating onto a uniform finer grid is called upsampling. If your original (continuous) signal contains no frequencies at or above half the sampling rate (equivalently, no details finer than two grid spacings), then the continuous function can be perfectly reconstructed from the sampled version (the Nyquist-Shannon sampling theorem). For an example of how you can interpolate in 2D, see bilinear interpolation.
In Matlab you can use interp1 or interp2 to interpolate 1D or regularly sampled 2D data (respectively), or griddata to interpolate from irregularly sampled 2D data.
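As an illustration of both cases (in Python/NumPy rather than Matlab; the bilinear helper below is my own minimal sketch, not a library routine):

```python
import numpy as np

# 1D linear interpolation off the sampling grid (the role interp1 plays).
xs = np.linspace(0.0, 1.0, 11)   # sampling grid
ys = xs ** 2                     # sampled signal
print(np.interp(0.25, xs, ys))   # 0.25 falls between two grid points

def bilinear(img, r, c):
    """Bilinear interpolation of a 2D array at a fractional (row, col):
    a weighted average of the 4 surrounding samples."""
    r0, c0 = int(np.floor(r)), int(np.floor(c))
    dr, dc = r - r0, c - c0
    return ((1 - dr) * (1 - dc) * img[r0, c0]
            + (1 - dr) * dc * img[r0, c0 + 1]
            + dr * (1 - dc) * img[r0 + 1, c0]
            + dr * dc * img[r0 + 1, c0 + 1])

img = np.arange(16.0).reshape(4, 4)
print(bilinear(img, 1.5, 1.5))   # average of the 4 center pixels: 7.5
```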
Would you have a for-loop going through each voxel and computing the corresponding formula?
Yes, exactly.
Matlab saves you from having to do this via explicit for-loops because it is designed to operate on matrices and vectors (i.e. multidimensional arrays). In Matlab this is called "vectorization". Definite integrals can be approximated with sum, cumsum, trapz, cumtrapz, etc.
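For example, a definite integral reduces to a vectorized sum over the sampled integrand; here is a Python/NumPy sketch of the idea (trapz in Matlab does the trapezoid-rule version for you):

```python
import numpy as np

# Approximate the integral of sin(x) over [0, pi], which is exactly 2,
# without an explicit loop over samples.
x = np.linspace(0.0, np.pi, 10001)
y = np.sin(x)
dx = x[1] - x[0]

riemann = dx * y.sum()                        # rectangle rule (a plain sum)
trapezoid = dx * (y[:-1] + y[1:]).sum() / 2   # trapezoid rule (what trapz does)
print(riemann, trapezoid)                     # both close to 2
```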
I've read the book "Digital Image Processing" by Gonzalez and Woods but I'm still at a loss. I have also read about the Numerical Recipes book series. Would that be the correct way?
Yes, Numerical Recipes would be a great start. It is very practical and covers most of the numerical methods you'll end up needing. (You will find that Matlab already implements everything you need, but Numerical Recipes will provide excellent background.)
I have taken an "algorithms and data structures" class, but I don't see the relation between the material presented there and implementing scientific algorithms.
The material treated in "Algorithms and data structures" courses tends to concentrate on structures like lists, arrays, trees, and graphs containing integers or strings and operations like sorting and selecting: problems for which there is typically a single correct result. When it comes to scientific algorithms, this is only half of the story. The other half concerns methods to estimate real numbers and analytic functions. You'll find this in a course on "Numerical Methods" (or "Numerical Analysis"; like this one - scroll down for the slides): how to estimate special functions, how to estimate integrals and derivatives, etc. Here one of the main tasks is to estimate the accuracy of your result, and one common pattern is to iterate a routine that improves an estimate until it is sufficiently accurate. (You might ask yourself how Matlab knows how to do something as simple as estimate a value of sin(x) for some x.)
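To make that last point concrete, here is one way (a sketch in Python, not how Matlab actually does it) to estimate sin(x) by iterating until the estimate is sufficiently accurate, using its Taylor series:

```python
import math

def my_sin(x, tol=1e-12):
    """Estimate sin(x) from its Taylor series, adding terms until the
    next correction falls below the requested tolerance."""
    x = math.fmod(x, 2 * math.pi)   # range reduction keeps the series short
    term = x                        # first term: x^1 / 1!
    total = 0.0
    n = 1
    while abs(term) > tol:
        total += term
        # Each odd-power term follows from the previous one:
        # x^(n+2)/(n+2)! = x^n/n! * (-x^2) / ((n+1)(n+2))
        term *= -x * x / ((n + 1) * (n + 2))
        n += 2
    return total

print(my_sin(1.0), math.sin(1.0))   # the two estimates agree closely
```

The stopping criterion is the essential part: the loop runs not for a fixed count but until the answer is accurate enough, which is the pattern mentioned above.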
As a simple example, here is a short script that computes the Radon transform of an image in Matlab. The Radon transform takes projections of an image over a set of projection angles. Instead of trying to compute the projection along an arbitrary angle, I rotate the entire image using imrotate so that the projection taken is always vertical. Then we can take the projection simply using sum, since the sum of a matrix returns a row vector containing the sum over each column.
You could write your own imrotate if you prefer, using interp2.
%%# Home-made Radon Transform
%# load a density map (image).
A = phantom;
n_pixels = size(A, 1); %# image width (assume square)
%# At what rotation angles do we want to take projections?
n_thetas = 101;
thetas = linspace(0, 180, n_thetas);
result = zeros(n_thetas, n_pixels);
%# Loop over angles
for ii=1:length(thetas)
theta = thetas(ii);
rotated_image = imrotate(A, theta, 'bilinear', 'crop'); %# keep original size
result(ii, :) = sum(rotated_image);
end
%# display the result
imagesc(thetas, 1:n_pixels, result.');
xlabel('projection angle [degrees]');
What was once an integral of the density along a ray is now a sum over a column of a discretely sampled image, which was in turn found by interpolating the original image over a transformed coordinate system.
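For readers without the Image Processing Toolbox, here is a rough NumPy translation of the same idea; the home-made rotate_crop below stands in for imrotate (playing the interp2 role with its own bilinear interpolation), and the small square image stands in for phantom:

```python
import numpy as np

def rotate_crop(img, theta_deg):
    """Rotate a square image about its center, cropped to the original
    size, using bilinear interpolation (a stand-in for imrotate 'crop')."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    t = np.deg2rad(theta_deg)
    rows, cols = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
    # Inverse-map each output pixel back into the input image.
    r_src = c + (rows - c) * np.cos(t) - (cols - c) * np.sin(t)
    c_src = c + (rows - c) * np.sin(t) + (cols - c) * np.cos(t)
    r0 = np.clip(np.floor(r_src).astype(int), 0, n - 2)
    c0 = np.clip(np.floor(c_src).astype(int), 0, n - 2)
    dr = np.clip(r_src - r0, 0.0, 1.0)
    dc = np.clip(c_src - c0, 0.0, 1.0)
    out = ((1 - dr) * (1 - dc) * img[r0, c0]
           + (1 - dr) * dc * img[r0, c0 + 1]
           + dr * (1 - dc) * img[r0 + 1, c0]
           + dr * dc * img[r0 + 1, c0 + 1])
    # Zero out pixels whose source fell outside the input image.
    inside = (r_src >= 0) & (r_src <= n - 1) & (c_src >= 0) & (c_src <= n - 1)
    return np.where(inside, out, 0.0)

def radon(img, thetas_deg):
    """One projection per angle: rotate, then sum down the columns."""
    return np.array([rotate_crop(img, t).sum(axis=0) for t in thetas_deg])

img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0                        # a centered square "phantom"
sinogram = radon(img, np.linspace(0.0, 180.0, 19))
print(sinogram.shape)                          # one row per angle: (19, 64)
```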
Best Answer
As you already suggest in your question, there is a big difference between implementing a formula and implementing a calculation routine. Your paper solution is a routine, not a formula.
You needed paper to write down numbers as intermediate results. Your algorithm will need something similar. Algorithms for solving systems of equations typically manipulate a matrix (2D array) of numbers. Put your equation coefficients into a 2D array, and try to describe the substitutions as operations on that matrix.
When solving this on paper, you make decisions along the way. E.g., you choose to combine the first and third equations and solve them for R. Then you use the second equation to solve for p. This will not always work: if some coefficients in your equations are zero, you might need to solve for another variable, or combine the equations in a different order. The algorithm needs to take all these scenarios into account.
Your problem is 'overdetermined': in general, two lines in 3D do not intersect. It takes a 'lucky shot' (a special case) for them to intersect. Your paper example happens to have a solution, and that solution happens to involve round numbers, which are easy to check. What if your first solution for P were 1.0 and your second solution were 0.9999999? Would that be a rounding error of the algorithm, or would the problem have no solution?
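One standard way to handle an overdetermined system like this numerically is least squares, which also exposes the tolerance question above. A Python/NumPy sketch (the specific lines are my own example, not the asker's): writing the two lines as p1 + t1*d1 and p2 + t2*d2, an intersection requires p1 + t1*d1 = p2 + t2*d2, which is three equations in the two unknowns t1, t2.

```python
import numpy as np

# Two 3D lines, each given by a point and a direction.
p1, d1 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 0.0])
p2, d2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])

# Stack the 3 equations in 2 unknowns: [d1 | -d2] * [t1, t2]^T = p2 - p1.
A = np.column_stack([d1, -d2])
b = p2 - p1
(t1, t2), *_ = np.linalg.lstsq(A, b, rcond=None)

closest1 = p1 + t1 * d1   # nearest point on line 1
closest2 = p2 + t2 * d2   # nearest point on line 2
gap = np.linalg.norm(closest1 - closest2)

# A tolerance, not exact equality, decides whether the lines "really"
# intersect or are merely skew lines that pass close to each other.
print("intersect" if gap < 1e-9 else "skew lines")
```

When the lines are skew, the same code still returns a well-defined answer (the pair of closest points) and a gap that quantifies how far from intersecting they are, which is exactly the 1.0-versus-0.9999999 judgment call from above made explicit.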