Calculate an arithmetic expression. If you choose one dimension you have
access to the value stored in the input buffer via the letter v in the
expression, to the index of v via the letter x and to the size
of the buffer via size. If you choose two dimensions, you can access
width and height instead of size, and in addition to x you also have y
for the vertical coordinate and the linearized index (computed as y * width +
x). Please be aware that v is a floating point number while the rest of
the variables are integers. One dimension is useful if you have
multidimensional data and want to address only one dimension. Say the
input is two-dimensional and 256 pixels wide, and you want to fill the
x-coordinate with x for all respective y-coordinates (a gradient in the
x-direction). Then you can write expression="x % 256". Another example is
the sinc function, which you would calculate as expression="sin(v) / x"
for 1D input. For more complex math or other operations please consider
using OpenCL.
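As an illustration of these semantics, here is a small Python sketch (the helper name and the use of eval are assumptions for demonstration, not part of the filter) that evaluates a 2D expression over a buffer:

```python
import math

def evaluate_2d(buffer, width, height, expr):
    """Evaluate `expr` for every pixel, exposing v, x, y, width and height."""
    out = []
    for y in range(height):
        for x in range(width):
            v = buffer[y * width + x]  # v is the float input value
            out.append(eval(expr, dict(vars(math)),
                            {"v": v, "x": x, "y": y,
                             "width": width, "height": height}))
    return out
```

With a 2-pixel-wide input, the gradient example `"x % 256"` produces the x-coordinate in every row.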
Load an arbitrary OpenCL kernel from filename or
source and execute it on each input. The kernel must accept as
many global float array parameters as connected to the filter and one
additional as an output. For example, to compute the difference between two
images, the kernel would look like:
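A minimal sketch of such a difference kernel (the kernel name and the exact indexing are illustrative; the inputs come first and the output buffer last, as described above):

```c
kernel void diff (global float *a, global float *b, global float *out)
{
    /* linearized pixel index over the 2D global work size */
    size_t idx = get_global_id (1) * get_global_size (0) + get_global_id (0);

    out[idx] = a[idx] - b[idx];
}
```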
Rotates images clockwise by an angle around a center
(x, y). When reshape is True, the rotated image is not
cropped, i.e. the output image size can be larger than the input size.
Moreover, this mode makes sure that the original coordinates of the input are
all contained in the output so that it is easier to see the rotation in the
output. Try e.g. rotation with center equal to \((0, 0)\) and
angle \(\pi / 2\).
Cuts the input into multiple tiles. The stream contains tiles in a zig-zag
pattern, i.e. the first tile starts at the top left corner of the input,
tiling proceeds along that row until its end and continues with the first
tile of the next row, until the final tile in the lower right corner.
Transformation between polar and Cartesian coordinate systems.
When transforming from Cartesian to polar coordinates the origin is in the
image center (width / 2, height / 2). When
transforming from polar to Cartesian coordinates the origin is in the image
corner (0, 0).
Stitches two images horizontally based on the given relative
shift, which indicates by how much the second image is shifted
with respect to the first one, i.e. there is an overlapping region of width
\(first\_width - shift\). The first image is inserted into the stitched image
from its left edge and the second image is inserted after the overlapping
region. If shift is negative, the two images are swapped and stitched as
described above with the shift made positive.
If you are stitching a 360-degree off-centered tomographic data set and
know the axis of rotation, shift can be computed as \(2 \cdot axis -
second\_width\) in case the axis of rotation is greater than half of the
first image width. If it is less, then the shift is \(first\_width - 2 \cdot
axis\). Moreover, you need to horizontally flip one of the images because
this task expects images which can be stitched directly, without additional
transformations.
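The two cases above can be condensed into a small Python sketch (the function name is hypothetical; the axis position and widths are in pixels):

```python
def shift_from_axis(axis, first_width, second_width):
    # axis of rotation right of the first image's center
    if axis > first_width / 2:
        return 2 * axis - second_width
    # axis of rotation left of (or at) the center
    return first_width - 2 * axis
```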
Stitching requires two inputs. If you want to stitch a 360-degree
off-centered tomographic data set you can use:
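A hypothetical ufo-launch pipeline for this case might look as follows (the paths, the shift value, and the exact multi-input syntax are assumptions; one branch is flipped horizontally as required by the stitching task):

```bash
ufo-launch \
    [read path=left/*.tif, \
     read path=right/*.tif ! flip direction=horizontal] ! \
    stitch shift=200 ! \
    write filename=stitched-%05i.tif
```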
By how much the second image is shifted with respect to the first one. For
example, shift 0 means that both images overlap perfectly and
stitching doesn't actually broaden the image. A shift corresponding to the
image width makes for a stitched image with twice the width of the
respective images (if they have equal width).
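The resulting width can be sketched in Python as (helper name assumed for illustration):

```python
def stitched_width(first_width, second_width, shift):
    if shift < 0:
        # negative shift: the images are swapped and the shift made positive
        first_width, second_width = second_width, first_width
        shift = -shift
    # the second image starts at column `shift`, after the overlap
    # of width first_width - shift
    return shift + second_width
```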
Interpolates incoming data from two compatible streams, i.e. the task
computes \((1 - \alpha) s_1 + \alpha s_2\) where \(s_1\) and
\(s_2\) are the two input streams and \(\alpha\) a blend factor.
\(\alpha\) is \(i / (n - 1)\) for \(n > 1\), \(n\) being
number and \(i\) the current iteration.
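The blending can be sketched in Python as (names assumed; s1 and s2 stand for corresponding pixels of the two streams):

```python
def blend(s1, s2, i, n):
    # alpha ramps linearly from 0 (first iteration) to 1 (last), n > 1
    alpha = i / (n - 1)
    return (1 - alpha) * s1 + alpha * s2
```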
Reads two datastreams, the first must provide a 3D stack of images that is
used to correlate individual 2D images from the second datastream. The
number property must contain the expected number of items in the second
stream.
Smoothing control parameter, should be around noise standard deviation
or slightly less. Higher h results in a smoother image but with blurred
features. If it is 0, estimate noise standard deviation and use it as
the parameter value.
Noise standard deviation, improves weights computation. If it is zero,
it is not automatically estimated, as opposed to h;
estimate-sigma has to be specified in order to override the
sigma value.
Apply a Gaussian profile with \(\sigma = \frac{P}{2}\), where \(P\)
is the patch-radius parameter, to the weight computation,
which decreases the influence of pixels towards the corners of the
patches.
Use a fast version of the algorithm described in [1]. The only
difference in the result from the classical algorithm is that no
Gaussian profile is used and, due to the nature of the fast algorithm,
floating point precision errors might occur for large images.
Estimate sigma based on [2], which overrides the sigma
parameter value. Only the first image in a sequence is used for
estimation and the estimated sigma is re-used for every subsequent
image.
Find large spots with extreme values in an image. First, pixels with values
greater than spot-threshold are added to the mask; then
connected pixels whose absolute difference from the originally detected
ones is greater than grow-threshold are added to the mask. In
the end, holes are removed from the mask.
Pixels connected to the ones found by spot-threshold with an
absolute difference greater than this threshold are added to the mask.
If the value is 0, it is automatically set to the full width at tenth
maximum of the estimated noise standard deviation.
Addressing mode specifies the behavior for pixels falling outside the
original image. See OpenCL sampler_t documentation for more
information. This parameter is used only for automatic noise standard
deviation estimation.
Interpolate masked values in rows of an image. For every pixel equal to one
in the mask, find the closest pixels where the mask is zero to the left and
right and linearly interpolate the value of the current pixel based on the
found left and right values. If use-one-sided-gradient is TRUE and
the mask extends to the left or right border of the image and on the other
side there are at least two non-masked pixels \(x_1\) and \(x_2\),
compute the value of the current pixel \(x\) by (in case the mask extends
to the right border; left is analogous) \(f(x) = f(x_2) + (x - x_2) \cdot
(f(x_2) - f(x_1))\). In case use-one-sided-gradient is FALSE
or there is only one valid pixel at one of the borders and all the others
are masked, use that pixel's value for all the remaining ones.
If TRUE, use two good pixels on one side to compute the gradient and fill
the masked values accordingly (for the case that the mask spans to the
border). If FALSE, just copy the last “good” pixel value to the masked
values.
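A Python sketch of the one-sided extrapolation above (function name assumed; the documented formula presumes unit spacing between the two good pixels x1 and x2):

```python
def fill_one_sided(f_x1, f_x2, x2, x):
    # f(x) = f(x2) + (x - x2) * (f(x2) - f(x1)), with x1 = x2 - 1,
    # i.e. the gradient is taken from the two adjacent good pixels
    return f_x2 + (x - x2) * (f_x2 - f_x1)
```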
Reduces or folds the input stream using a generic OpenCL kernel loaded
from filename or source. The kernel must accept
exactly two global float arrays, one for the input and one for the output.
Additionally a second finish kernel can be specified which is
called once when processing has finished. This kernel must have two
arguments as well, the global float array and an unsigned integer count.
Folding (i.e. setting the initial data to a known value) is enabled by
setting fold-value.
Here is an OpenCL example of how to compute the average:
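A sketch of the two kernels (the names are arbitrary; pass them via kernel=sum and finish=divide, with fold-value=0 so the accumulator starts at zero):

```c
kernel void sum (global float *in, global float *out)
{
    size_t idx = get_global_id (1) * get_global_size (0) + get_global_id (0);

    /* accumulate every incoming item into the output buffer */
    out[idx] += in[idx];
}

kernel void divide (global float *out, uint count)
{
    size_t idx = get_global_id (1) * get_global_size (0) + get_global_id (0);

    /* called once at the end; count is the number of processed items */
    out[idx] /= count;
}
```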
Name of the kernel that is called at the end after all iterations. It must
take a global float array and an unsigned integer as arguments, the first
being the data, the second the iteration counter.
Symmetrical to the slice filter, the stack filter stacks two-dimensional
input. If number is not a divisor of the number of input images, the
last produced stack (whose indices would exceed the number of input
images) will contain arbitrary images from the previous iterations.
Stacks input images up to the specified number and then
replaces old images with incoming new ones as they come. The first image is
copied to all positions in the beginning. By default, images in the window
are not ordered, i.e. if e.g. number = 3, then the window will
contain the following input images: (0, 0, 0), (0, 1, 0), (0, 1, 2), (3, 1,
2), (3, 4, 2), (3, 4, 5) and so on. If you want them to appear ordered with
respect to their arrival time, use ordered.
Lays out input images on a square grid. If the number of
input elements is not the square of some integer value, the next higher
square number is chosen and the remaining grid positions are blackened.
Turns a three-dimensional image into a two-dimensional image by interleaving
the third dimension, i.e. [[[XXX],[YYY],[ZZZ]]] is turned into
[[XYZ],[XYZ],[XYZ]]. This is useful to merge a separated multi-channel RGB
image into a “regular” RGB image that can be shown with cv-show.
This task adds the channels key to the output buffer containing the
original depth of the input buffer.
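The reordering can be sketched with NumPy (function name assumed; a (depth, height, width) stack becomes a (height, width * depth) image with the channels of each pixel adjacent):

```python
import numpy as np

def interleave(stack):
    depth, height, width = stack.shape
    # move the channel axis last, then flatten it into each row
    return np.transpose(stack, (1, 2, 0)).reshape(height, width * depth)
```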
Compute the Fourier spectrum of input data. dimensions
specifies the dimensionality of the transform; it is independent of the
input dimensions. E.g. if you have 3D input, you can compute a 3D FT, a
batch of 2D FTs of every plane, or a batch of 1D FTs of every row. If you
have 2D input, you may compute a 2D FT or a batch of 1D FTs of every row.
For every dimension, if size is not specified and
auto-zeropadding is True, the input is padded to the next power
of two. If it is False, the output has the same size as the input (via the
Chirp-z transform from [Rabiner et al., 1969]).
Please note that Chirp-z needs to perform 2 padded-size FFTs and pads the
input to the next power of two of double the input size, so it can be
considerably slower than using auto-zeropadding. E.g. if the
input size is 1023x1023 pixels, auto-zeropadding=True pads the input
to 1024x1024 pixels. In the case of auto-zeropadding=False and no
user size specification (see parameters below), Chirp-z pads the input to
2048x2048. On top of that, it requires two FFTs with the padded
size, so in this case it is eight times slower than using
auto-zeropadding=True (a factor of four for the padding in the two
dimensions and an additional factor of two for the two FFTs).
Compute the inverse Fourier transform of spectral input data (see fft for
details on how the transform works). You may crop the output
by setting the crop-width and crop-height
parameters, otherwise the output has the same size as the input.
Any of ramp, ramp-fromreal, butterworth, faris-byer,
hamming and bh3 (Blackman-Harris-3). The default filter is
ramp-fromreal which computes a correct ramp filter avoiding offset
issues encountered with naive implementations.
Filter vertical stripes. The input and output are in the 2D frequency
domain. If vertical-sigma is 0, the filter multiplies the horizontal
frequencies (for frequency ky=0) with a Gaussian profile centered at zero
frequency. Otherwise it also applies a vertical Gaussian profile
(1 - Gaussian), which enables filtering of stripes that are not perfectly
vertical, useful for broader stripes and stripes which are not perfectly
straight. If horizontal-sigma is 0, only the vertical Gaussian profile
is applied (i.e. a horizontal stripe around ky=0 is cut out). This is
useful e.g. for filtering DMM stripes.
Horizontal filter strength, which is the sigma of the Gaussian. Small
values, e.g. 1e-7, cause only the zero frequency to remain in the
signal, i.e. stronger filtering. Values around 1 are a good starting
point.
Vertical filter strength, which is the sigma of the Gaussian. The larger
the value, the more non-vertical frequencies are removed. A value around 4
is a good starting point.
Filter stripes in 1D along the x-axis. The input and output are in the
frequency domain. The filter multiplies the frequencies with an inverted
Gaussian profile centered at zero frequency, i.e. f(k) = 1 - gauss(k),
in order to suppress the low frequencies.
Read a stream of two-dimensional projections and output a stream of
transposed sinograms. number must be set to the
number of incoming projections in order to allocate enough memory.
Computes the backprojection of multiple sinograms in parallel.
Stream multiple sinograms by introducing a stack filter of a certain size
before this filter.
A suitable minimum stack size must be specified based on the precision mode.
Computes the 2D Fourier spectrum of the reconstructed image using the 1D
Fourier projections of the sinogram (the fft filter must be applied before).
There are no default values for the properties, therefore they must be
assigned manually.
Computes and applies a Fourier filter to correct phase-shifted data.
Expects frequencies as an input and produces frequencies as an output.
Propagation distance can be specified for both x and y directions together
by the distance parameter or separately by
distance-x and distance-y, which is useful e.g.
when pixel size is not symmetrical. distance may be a list in
which case a multi-distance CTF phase retrieval is performed. In this case
method must be set to ctf_multidistance.
Regularization parameter is log10 of the constant to be added to the
denominator to regularize the singularity at zero frequency:
\(1/\sin(x) \rightarrow 1/(\sin(x) + 10^{-RegPar})\). It is also
\(\log_{10}(\delta / \beta)\) where the complex refractive index is
\(\delta + \beta \cdot 1j\).
Parameter for quasiparticle phase retrieval which defines the width of
the rings to be cropped around the zero crossings of the CTF denominator
in Fourier space.
Typical values are in [0.01, 0.1]; quasiparticle retrieval is rather
insensitive to the cropping width.
Computes \(\alpha A \cdot B + \beta C\) where \(A\), \(B\) and \(C\) are input
streams 0, 1 and 2 respectively. \(A\) must be of size \(m \times k\), \(B\)
of size \(k \times n\) and \(C\) of size \(m \times n\).
Note
This filter is only available if CLBlast support is enabled.
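For reference, the computed expression corresponds to this NumPy one-liner (a sketch of the math, not the CLBlast implementation):

```python
import numpy as np

def gemm(alpha, A, B, beta, C):
    # alpha * (A x B) + beta * C with shapes (m, k), (k, n), (m, n)
    return alpha * (A @ B) + beta * C
```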
Segments a stack of images given a field of labels using the random walk
algorithm described in [3]. The first
input stream must contain three-dimensional image stacks, the second input
stream a label image with the same width and height as the images. Any pixel
value other than zero is treated as a label and used to determine segments
in all directions.
Repeats output of incoming data items. It uses a low-overhead policy to
avoid unnecessary copies. You can expect the data items to be on the device
where the data originated.